gbrHZq07mq
When is the notion of $T(\bar{w}) > 0$ (as introduced in Section 2.2) used as the criterion in the proofs on pages 6 and 7? From this proof, I only see that you can perform LTL operations on input strings, but I am not sure how this shows that a string in the language will never be mapped to a string outside the language?
LOGICAL LANGUAGES ACCEPTED BY TRANSFORMER ENCODERS WITH HARD ATTENTION

Pablo Barceló, IMC, PUC Chile & IMFD Chile & CENIA, Santiago, Chile, pbarcelo@uc.cl. Alexander Kozachinskiy, IMFD and CENIA, Santiago, Chile, alexander.kozachinskyi@cenia.cl. Anthony Widjaja Lin, University of Kaiserslautern-Landau & Max-Planck Institute for Software Systems, Kaiserslautern, Germany, awlin@mpi-sws.org. Vladimir Podolskii, Tufts University, Medford, USA, vladimir.podolskii@tufts.edu.

ABSTRACT

We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class AC^0, i.e., accepted by a family of poly-sized and depth-bounded boolean circuits with unbounded fan-ins. On the other hand, AHAT encoders can recognize languages outside AC^0, but their expressive power still lies within the bigger circuit complexity class TC^0, i.e., AC^0-circuits extended by majority gates. We first show a negative result that there is an AC^0-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of AC^0-languages, namely, all languages definable in first-order logic with arbitrary unary numerical predicates. This logic includes, for example, all regular languages from AC^0. We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms. Using these results, we obtain a characterization of which counting properties are expressible by UHAT and AHAT, in relation to regular languages.

1 INTRODUCTION

Transformers have revolutionized natural language processing by facilitating the efficient and effective modeling of intricate contextual relationships within text [Vaswani et al., 2017]. This remarkable capability has sparked numerous investigations into the potential boundaries of transformers' power [Hahn, 2020; Yao et al., 2021; Pérez et al., 2021; Weiss et al., 2021; Hao et al., 2022; Chiang & Cholak, 2022; Bhattamishra et al., 2020; Chiang et al., 2023; Merrill et al., 2022; Merrill & Sabharwal, 2023; Strobl, 2023]. One natural method for addressing this question is to explore the classes of formal languages that these architectures can recognize. This approach provides insight into their strengths and limitations. The answer to this question naturally relies on the specific features allowed within transformer encoders. These encompass the interplay between encoders and decoders, the kind of functions used for positional encodings and attention mechanisms, and considerations of fixed or unbounded precision, among other factors. While the capacity of transformers that incorporate both encoders and decoders to recognize languages is well understood today (indeed, such architectures are Turing-complete and can thus recognize any computable language [Pérez et al., 2021]), a precise characterization of the languages accepted by transformer encoders is lacking. Unique Hard Attention Transformers (UHAT) are a class of transformer encoders that has been the subject of many recent papers. As was shown by Hao et al. (2022), UHATs recognize only languages in AC^0, i.e., languages recognized by families of Boolean circuits of unbounded fan-in that have constant depth and polynomial size.
Intuitively, this means that UHATs are weak at "counting" (more precisely, reasoning about the number of occurrences of various letters in the input word). For example, consider the following languages: majority and parity. The first one corresponds to the set of words over alphabet \(\{a, b\}\) for which the majority of positions are labeled by \(a\), while the second checks if the number of positions labeled \(a\) is even. That these languages are not in AC\(^0\) follows from a groundbreaking result in circuit complexity theory (Furst et al., 1981; Ajtai, 1983). Hence, neither of them is accepted by UHATs. However, which fragment of the AC\(^0\) languages can actually be recognized by UHATs remains an unresolved question. We start by showing that not all AC\(^0\) languages can be accepted by UHATs. This is obtained by combining results of Ajtai (1983) and Hahn (2020). Based on the previous observation, we focus on identifying a rich fragment of AC\(^0\) that can in fact be embedded into the class of UHATs. To achieve this, we use the characterization of AC\(^0\) as the class of languages expressible in FO(All), the extension of first-order logic (FO) with all numerical predicates defined in relation to the linear order of a word (Immerman, 1999). We show that UHATs recognize all languages definable in FO(Mon), the restriction of FO(All) with unary numerical predicates only (Barrington et al., 2005). The logic FO(Mon) is highly expressive. Unlike FO, it can express non-regular languages like \(\{a^n b^n \mid n > 0\}\). Remarkably, it contains all regular languages within AC\(^0\), which includes examples like \((aa)^*\) — a language not definable in FO. Additionally, our result subsumes the result of Yao et al. (2021), where it is shown that Dyck languages of bounded nested depth can be recognized by UHATs. These languages are regular and belong to AC\(^0\), hence they are expressible in FO(Mon). To establish the result that UHATs recognize all languages definable in FO(Mon), we take a slightly circuitous route: rather than directly formulating FO(Mon) sentences as UHATs, we show that each formula in LTL(Mon), the extension of linear temporal logic (LTL) (Clarke et al., 2018) with arbitrary unary numerical predicates, can be equivalently represented as an UHAT. The proof for FO(Mon) then follows from Kamp's seminal theorem (Kamp, 1968), which establishes the equivalence between languages definable in FO and LTL. The advantage of dealing with LTL, in contrast to FO, lies in the fact that all LTL formulas are unary in nature, i.e., they are interpreted as sets of positions on a word, unlike FO formulas which possess arbitrary arity. This property aligns well with the expressive capabilities of UHATs, facilitating a proof through structural induction. While the fact that UHAT is in AC\(^0\) implies limited counting abilities of such encoders, recent work has shown that a slight extension of the hard attention mechanism can help in recognizing languages outside AC\(^0\) (Hao et al., 2022). Instead of using unique hard attention, this model uses average hard attention (AHAT), which refers to the idea that the attention mechanism returns the uniform average value among all positions that maximize the attention. To what extent does AHAT enrich the counting ability of UHAT? In answering this question, we introduce a logic named LTL(C, +), which is an extension of LTL(Mon) that naturally incorporates counting features.
We show that any language that can be defined within LTL(C, +) can also be identified by an AHAT. The logic LTL(C, +) can express interesting languages lying outside AC\(^0\) including majority and parity. More generally, our result implies that AHATs are equipped with a powerful counting ability: all permutation-closed languages over a binary alphabet and all permutation closures of regular languages (which are in general not context-free) can be recognized by AHATs. As a corollary, we provide a characterization of the “counting properties” of regular languages which can be captured by UHAT and AHAT. Two approaches for understanding counting properties of regular languages can be found in formal language theory: (1) Parikh-equivalence, and (2) taking letter-permutation closure. Both approaches “remove” the ordering from the input string, and as a result only the letter-count (more popularly called Parikh images) of an input string matters. In the setting of (1) (e.g. see Parikh [1966], Kozen [1997]), a machine is allowed to try all letter reorderings of the input string \(w\), and accepts iff the machine accepts some letter reordering of \(w\). According to the well-known Parikh’s Theorem (Parikh [1966]), each context-free language can in this way be captured by a regular language, e.g., \(\{0^n 1^n : n \geq 0\}\) can be captured by \((01)^*\). We show in this paper that each regular language is Parikh-equivalent to an UHAT language, despite the fact that PARITY is not in UHAT. In the setting of (2), a machine must accept all letter permutations of an input string \(w\). The letter-permutation closure of a regular language is not necessarily regular, e.g., such a closure language of \((abc)^*\) consists of all strings with the same number of \(a\)'s, \(b\)'s, and \(c\)'s, which is not even context-free. In this setting, although UHAT cannot capture regular languages like PARITY, AHAT can surprisingly capture all regular languages. Related work. There has been very little research on identifying logical languages that can be accepted by transformers. The only example we are aware of is the recent work by Chiang et al. (2023), in which a variant of first-order logic with counting quantifiers is demonstrated to be embeddable into transformer encoders with a soft attention mechanism. The primary distinction between their work and our results is the choice of the attention mechanism. Additionally, the logic examined in their paper does not have access to the underlying word order being considered. This implies that some simple languages, such as $a^*b^*$, which are definable in FO, are not definable in their logic. Due to the space constraints, some of the proofs are omitted and can be found in the online version of this paper (Barceló et al., 2023). 2 BACKGROUND NOTIONS AND RESULTS 2.1 Transformer encoders An encoder layer is a function that takes a sequence of vectors, $v_0, \ldots, v_{n-1}$, in $\mathbb{R}^d$ as input, where $d \geq 0$. It produces an output sequence of vectors of the same length, $v'_0, \ldots, v'_{n-1}$, in $\mathbb{R}^e$, with $e \geq 0$. The length of the sequence, $n$, can be arbitrary, but input and output dimensions, $d$ and $e$, are fixed for an encoder layer. For the first part of the paper we employ a unique hard attention mechanism, meaning that a position only attends to the element with the highest attention score. 
Formally, an encoder layer with unique hard attention is given by two affine transformations $A, B : \mathbb{R}^d \to \mathbb{R}^d$ and one feed-forward neural network $N : \mathbb{R}^{2d} \to \mathbb{R}^e$ with ReLU activation function. For $i \in \{0, \ldots, n - 1\}$, we set $$a_i \leftarrow v_{j_i},$$ where $j_i \in \{0, \ldots, n - 1\}$ is the minimum position that maximizes the attention score $\langle Av_i, Bv_j \rangle$ over $j \in \{0, \ldots, n - 1\}$. The $a_i$s are often known as attention vectors. After that, we set $$v'_i \leftarrow N(v_i, a_i), \quad i = 0, \ldots, n - 1.$$ It is well-known that feed-forward neural networks with ReLU activation can express the function $\max\{x, y\}$. Thus, we may assume that $N$ can be an arbitrary composition of affine functions with $\max$. Transformer encoder. A unique hard attention transformer encoder (UHAT) is defined simply as the repeated application of encoder layers with unique hard attention.

2.2 Languages accepted by transformer encoders

Next, we define how a transformer can be used to accept languages over a finite alphabet. This requires extending transformer encoders with three features: a function for representing alphabet symbols as vectors (which, for the purposes of this paper, we represent as one-hot encodings), another function that provides information about the absolute positions of these symbols within the input word, and a vector that is used for checking whether the word should be accepted or not. The function that provides information about positions is often referred to as a positional encoding, and it is essential for recognizing properties of ordered sequences of vectors. In fact, without positional encoding, encoders treat input sequences as invariant to permutations (Pérez et al., 2021). Consider a finite alphabet $\Sigma$ and let $T$ be an UHAT that takes a sequence of vectors over $\mathbb{R}^d$ as input and converts it into a sequence of vectors over $\mathbb{R}^e$. A language $L \subseteq \Sigma^+$ is accepted by $T$ if there is an embedding function $f : \Sigma \to \mathbb{R}^d$, a positional encoding function $p : \mathbb{N} \times \mathbb{N} \to \mathbb{R}^d$, and a vector $t \in \mathbb{R}^e$, such that for every $\bar{w} \in L$ we have $T'(\bar{w}) > 0$, and for every $\bar{w} \in \Sigma^+ \setminus L$ we have $T'(\bar{w}) < 0$. Here, $T' : \Sigma^+ \to \mathbb{R}$ is defined as follows. Let $\bar{w} = a_0 \ldots a_{n-1} \in \Sigma^n$, and suppose the output of $T$ when given the input sequence $f(a_0) + p(0, n), \ldots, f(a_{n-1}) + p(n - 1, n)$ is the sequence $v_0, \ldots, v_{n-1}$. Then we set $T'(\bar{w}) = \langle t, v_0 \rangle$. Some previous papers, for instance Hao et al. (2022), only allow rational numbers to be used in UHATs. We find this too restrictive because functions such as $\cos$ and $\sin$ are widely used in practice. Nevertheless, we stress that our results hold with this restriction, by taking good-enough approximations by rational numbers.

2.3 First-order logic on words

We assume familiarity with first-order logic (FO). Let $\Sigma$ be a finite alphabet. A word $\bar{w} = a_0 \cdots a_{n-1}$ in $\Sigma^+$ is represented as a structure $S_{\bar{w}}$ whose domain is $\{0, \ldots, n - 1\}$. This structure includes a binary relation $<$ that is interpreted as the linear order on the domain, and for each symbol $a \in \Sigma$, there is a unary relation $P_a$ containing positions $i = 0, \ldots, n - 1$ where $a_i = a$.
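To make the encoder layers of Section 2.1 and the acceptance criterion of Section 2.2 concrete, here is a minimal NumPy sketch of one unique-hard-attention layer and of the test $T'(\bar{w}) > 0$. It is an illustrative reading of the definitions, not the constructions used later in the proofs; the function names (`uhat_layer`, `accepts`) and the way the feed-forward network and layers are passed around are assumptions of this sketch.

```python
import numpy as np

def uhat_layer(V, A, b_A, B, b_B, N):
    """One encoder layer with unique hard attention.

    V    : (n, d) array, row i is the input vector v_i.
    A, B : (d, d) matrices with biases b_A, b_B (the two affine maps).
    N    : feed-forward network R^{2d} -> R^e, applied position-wise.
    """
    Q = V @ A.T + b_A                    # A v_i for every position i
    K = V @ B.T + b_B                    # B v_j for every position j
    scores = Q @ K.T                     # scores[i, j] = <A v_i, B v_j>
    j_star = np.argmax(scores, axis=1)   # smallest j maximizing the score (argmax returns the first maximizer)
    attn = V[j_star]                     # a_i = v_{j_i}
    return np.stack([N(np.concatenate([v, a])) for v, a in zip(V, attn)])

def accepts(layers, f, p, t, word):
    """T'(w) = <t, v_0>, computed on inputs f(a_i) + p(i, n); accept iff T'(w) > 0."""
    n = len(word)
    V = np.stack([f[a] + p(i, n) for i, a in enumerate(word)])
    for layer in layers:                 # a UHAT is a composition of such layers
        V = layer(V)
    return float(t @ V[0]) > 0
```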
Given an FO sentence $\phi$ over words, that is, an FO formula without free variables, we denote by $L(\phi)$ the language of all words $\bar{w} \in \Sigma^+$ satisfying $S_{\bar{w}} \models \phi$. If a language $L \subseteq \Sigma^+$ satisfies $L = L(\phi)$, for some FO sentence $\phi$, then we say that $L$ is definable in FO. Example 1. First-order logic (FO) enables us to define certain languages of interest. Here, we present an illustrative example. Initially, we recognize that we can employ FO to define a relation $\text{first}(x) := \neg \exists y(y < x)$ that exclusively holds true at the first position of a word. Correspondingly, we can define a relation $\text{last}(x) := \neg \exists y(x < y)$ that holds solely at the last position of the word. Moreover, it is possible to define a binary relation $\text{succ}(x, y) := x < y \land \neg \exists z(x < z \land z < y)$, which defines the successor relation within the domain. With these expressions, we can show that FO is capable of defining the language $(ab)^+$: $$\exists x (\text{first}(x) \land P_a(x)) \land \exists x (\text{last}(x) \land P_b(x)) \land \forall x \forall y (\text{succ}(x, y) \rightarrow (P_a(x) \leftrightarrow P_b(y))).$$ That is, the first symbol of the word is an $a$, the last one is a $b$, every $a$ is followed by a $b$, and every $b$ is preceded by an $a$. □

2.4 Unary numerical predicates

It is known that FO sentences can only define regular languages. In turn, there are regular languages that are not definable in FO. An example is the language $(aa)^*$, which contains those words formed solely by the symbol $a$ that are of even length. However, there is a straightforward extension of FO that can define this language: all we need to do is add a unary predicate $\text{even}(x)$, which holds true at position $i$ in a word if and only if $i$ is even. In fact, extending FO with the predicate $\text{even}(x)$ allows us to define the language $(aa)^*$ using the following formula, which indicates that the last symbol in the word satisfies the unary predicate $\text{even}$: $\forall x P_a(x) \land \forall y (\text{last}(y) \rightarrow \text{even}(y))$. The extension of FO with unary numerical predicates can then be useful for defining languages. We define a unary numerical predicate $\Theta$ as an infinite family of functions $\theta_n : \{0, \ldots, n\} \rightarrow \{0, 1\}$, $n > 0$. Given a word $\bar{w}$ in $\Sigma^+$ of length $n$, for $n > 0$, we have that the predicate $\Theta(x)$ holds in position $i$ in $\bar{w}$ if and only if $\theta_n(i) = 1$ (for now, we do not use the value of $\theta_n$ at $n$, as positions are numbered from 0 to $n - 1$; we will use this value in Section 4). Notice that under our definition, the truth of a unary numerical predicate at position $i$ in the word $\bar{w}$ depends not only on $i$ but also on the length of the word $\bar{w}$. As we will explore further, this characteristic is advantageous for defining interesting languages in FO extended with arbitrary unary numerical predicates. Following the literature, we write FO(Mon) for such an extension [Barrington et al., 2005]. Example 2. Consider, for example, the non-regular language $\{a^n b^n | n > 0\}$. We show that it can be expressed in FO(Mon) with the help of a unary numerical predicate $\Theta(x)$ such that $\theta_n(i) = 1$ iff $n$ is even and $i = n/2 - 1$. In fact, it suffices to use the formula: $\exists x (\Theta(x) \land P_a(x) \land \forall y (y < x \rightarrow P_a(y)) \land \forall y (x < y \rightarrow P_b(y)))$.
This formula expresses that the middle point $i$ of $\bar{w}$ exists, is labeled as $a$, and all positions smaller than $i$ are also labeled $a$, while all positions larger than $i$ are labeled as $b$. This example illustrates the significance of unary numerical predicates depending on both the position and the length of the word over which the formula is evaluated. □ The definition of the language $L(\phi) \subseteq \Sigma^+$ defined by an FO(Mon) sentence $\phi$ is analogous to the one we provided for FO.

3 AC⁰ languages accepted by UHATs

3.1 Not all languages in AC⁰ are accepted by UHATs.

Hao et al. (2022) proved that languages accepted by UHATs belong to the circuit complexity class AC⁰, i.e., the class of languages accepted by families of Boolean circuits of unbounded fan-in, constant depth, and polynomial size. We combine results by Ajtai (1983) and Hahn (2020) to show that the converse inclusion does not hold, i.e., there are AC^0 languages that are not accepted by UHATs. As shown in Ajtai (1983), there is an AC^0-family of circuits \( \{C_n : \{0, 1\}^n \rightarrow \{0, 1\}\}_{n \in \mathbb{N}} \) such that for all \( n \), the circuit \( C_n \) accepts all strings with at least \( 2n/3 \) ones and rejects all strings with at most \( n/3 \) ones. Consider the language approximate majority, consisting of strings accepted by circuits from \( \{C_n\} \). This language is in AC^0 by construction. However, as we state next, it cannot be recognized by an UHAT. This result is proved by using a property of UHATs established in Hahn (2020). **Proposition 1.** There is no UHAT that accepts the language approximate majority. Viola (2009) shows that \( \{C_n\} \) can be made polynomial-time computable, which implies the existence of a polynomial-time computable language from AC^0 that cannot be accepted by an UHAT.

### 3.2 Main result: FO(Mon) languages are accepted by UHATs

Proposition 1 tells us that not all AC^0 languages are accepted by UHATs. In this section, we identify a significant subset of AC^0 languages that can be accepted by UHATs. To accomplish this, we rely on the characterization of the class AC^0 as those languages that can be defined in FO extended with arbitrary numerical predicates. Our main result establishes that as long as we restrict ourselves to unary numerical predicates, translation into UHATs is possible. **Theorem 1.** Let \( \Sigma \) be a finite alphabet and \( \phi \) an FO(Mon) sentence over words from the alphabet \( \Sigma \). There is an UHAT that accepts \( L(\phi) \). Proving this result by induction on FO(Mon) formulas, which would be the most natural approach to tackle the problem, turns out to be difficult. The challenge arises because the FO(Mon) formulas obtained by induction can have arbitrary arity, and transformer encoders do not seem capable of handling the requirements imposed by such formulas. To address this issue, we take a different approach. We employ Kamp's Theorem, which establishes that the languages definable in FO are precisely those that are definable in linear temporal logic (LTL) (Kamp, 1968).

### 3.3 Using LTL(Mon) to prove our main result

We first explain how LTL is defined, as this is crucial to understanding the remainder of the paper. Let \( \Sigma \) be a finite alphabet. LTL formulas over \( \Sigma \) are defined as follows: if \( a \in \Sigma \), then \( a \) is an LTL formula. Additionally, LTL formulas are closed under Boolean combinations.
Finally, if \( \phi \) and \( \psi \) are LTL formulas, then \( X\phi \) and \( \phi U \psi \) are also LTL formulas. Here, \( X \) is referred to as the next operator, and \( U \) as the until operator. LTL formulas are unary, i.e., they are evaluated over positions within a word. Let \( \bar{w} = a_0 \cdots a_{n-1} \) be a word in \( \Sigma^+ \), and let \( i = 0, \ldots, n - 1 \). We define the satisfaction of an LTL formula \( \phi \) over \( \bar{w} \) at position \( i \), written as \( (\bar{w}, i) \models \phi \), inductively as follows (omitting Boolean combinations):

- \( (\bar{w}, i) \models a \) if and only if \( a = a_i \), for \( a \in \Sigma \).
- \( (\bar{w}, i) \models X\phi \) if and only if \( i < n - 1 \) and \( (\bar{w}, i + 1) \models \phi \). In other words, \( \phi \) holds in the next position after \( i \) (if such a position exists).
- \( (\bar{w}, i) \models \phi U \psi \) if and only if there exists a position \( j = i, \ldots, n - 1 \) for which \( (\bar{w}, j) \models \psi \) and such that \( (\bar{w}, k) \models \phi \) for every \( k \) with \( i \leq k < j \). That is, \( \phi \) holds starting from position \( i \) until the first position where \( \psi \) holds (and a position where \( \psi \) holds must exist).

We can extend LTL with unary numerical predicates in the same way as we did for FO. Formally, we define LTL(Mon) as the extension of LTL with every formula of the form \( \Theta \), for \( \Theta \) a unary numerical predicate. We write \( (\bar{w}, i) \models \Theta \) to denote that \( \theta_n(i) = 1 \), where \( n \) is the length of \( \bar{w} \). If \( \phi \) is an LTL(Mon) formula over \( \Sigma \), we write \( L(\phi) \) for the set of words \( \bar{w} \in \Sigma^+ \) with \( (\bar{w}, 0) \models \phi \). Kamp's Theorem establishes that for every FO sentence \( \phi \) there exists an LTL formula \( \psi \) such that \( L(\phi) = L(\psi) \), and vice-versa. It is straightforward to see that this property extends to the logics FO(Mon) and LTL(Mon). **Proposition 2.** (Kamp, 1968) For every FO(Mon) sentence \( \phi \) there exists an LTL(Mon) formula \( \psi \) such that \( L(\phi) = L(\psi) \), and vice-versa. Our proof of Theorem 1 then follows directly from Proposition 2 and the following result. **Proposition 3.** Let $\Sigma$ be a finite alphabet and $\phi$ an LTL(Mon) formula defined over words from the alphabet $\Sigma$. There is an UHAT $T$ that accepts $L(\phi)$. Before proving this result, we make the following important remark regarding the positional encoding $p$ used by $T$ to accept $L(\phi)$. On a pair $(i, n) \in \mathbb{N} \times \mathbb{N}$ with $i < n$, we have that $p(i, n)$ is composed of elements $i$, $\frac{1}{(i+1)}$, $(-1)^i$, $\cos\left(\pi(1-2^{-i})/10\right)$, $\sin\left(\pi(1-2^{-i})/10\right)$, and $\theta_n(i)$, for every unary numerical predicate $\Theta$ mentioned in $\phi$. **Proof of Proposition 3.** Let $\phi$ be a formula of LTL(Mon). We say that a UHAT realizes $\phi$ position-wise if, given a word $\bar{w} = a_0 \ldots a_{n-1} \in \Sigma^+$, the UHAT outputs a sequence: $$\mathbb{I}\{(\bar{w}, 0) \models \phi\}, \mathbb{I}\{(\bar{w}, 1) \models \phi\}, \ldots, \mathbb{I}\{(\bar{w}, n-1) \models \phi\};$$ that is, a binary word indicating for which positions $\phi$ is true on $\bar{w}$ and for which it is false. We show by structural induction that every LTL(Mon) formula is realizable position-wise by some UHAT. Let us consider first the base cases.
If $\phi = a$, for some $a \in \Sigma$, our goal is to obtain a sequence: $$\mathbb{I}\{a_0 = a\}, \mathbb{I}\{a_1 = a\}, \ldots, \mathbb{I}\{a_{n-1} = a\}.$$ This can easily be achieved by using a one-hot encoding as the embedding function. In turn, if $\phi = \Theta$, for $\Theta$ a unary numerical predicate, then $\phi$ can be realized position-wise using the corresponding positional encoding $p(i, n) = \theta_n(i)$. We continue with Boolean combinations. They can be implemented position-wise by compositions of affine transformations and $\max$ as follows: $\neg x = 1 - x$ and $x \lor y = \max\left\{\frac{2x-1}{2}, \frac{2y-1}{2}\right\} + \frac{1}{2}$. For the cases when our formula is of the form $X\phi$ or $\phi U \psi$, we need the following lemma. **Lemma 1.** There is an UHAT that transforms each $x_0, \ldots, x_{n-1} \in \{0, 1\}$ as follows: $$x_0, \ldots, x_{n-2}, x_{n-1} \mapsto x_0, \ldots, x_{n-2}, 0.$$ Let us assume now that our formula is of the form $X\phi$. It is enough to design a unique hard attention layer in which attention is always maximized at the next position. More precisely, we construct an UHAT that outputs a sequence of vectors $v_0, \ldots, v_{n-1} \in \mathbb{R}^3$, and a linear transformation $A : \mathbb{R}^3 \to \mathbb{R}^3$, such that $\arg\max_{j \in \{0, \ldots, n-1\}} \langle Av_i, v_j \rangle = \{i + 1\}$, for $i = 0, \ldots, n - 2$. This will allow us to "send" $\mathbb{I}\{(\bar{w}, i + 1) \models \phi\} = \mathbb{I}\{(\bar{w}, i) \models X\phi\}$ to the $i$th position, for $i = 0, \ldots, n - 2$. It only remains then to apply Lemma 1 to obtain $0 = \mathbb{I}\{(\bar{w}, n - 1) \models X\phi\}$ at the last position. Using our positional encoding and an affine position-wise transformation, we can obtain: $$v_i = \left(\cos\left(\frac{\pi(1-2^{-i})}{10}\right), \sin\left(\frac{\pi(1-2^{-i})}{10}\right), (-1)^i \cdot 10\right).$$ Let $A$ be a linear transformation that inverts the sign of the third coordinate. Observe that: $$\langle Av_i, v_j \rangle = \cos\left(\frac{\pi(2^{-i} - 2^{-j})}{10}\right) + (-1)^{i+j+1} \cdot 10.$$ We claim that, for a fixed $i$, this quantity is maximized at $j = i + 1$. First, those $j$s that have the same parity as $i$ (in particular, $j = i$) cannot achieve the maximum because the second term is $-10$. For $j$s with a different parity, we have $\langle Av_i, v_j \rangle = \cos\left(\pi(2^{-i} - 2^{-j})/10\right) + 10$. Since all angles are in $[-\pi/10, \pi/10]$, this quantity is maximized when $|2^{-i} - 2^{-j}|$ is minimized. For $j < i$, the last quantity is at least $2^{-i}$, and for $j > i$, the minimum of this quantity is $2^{-i-1}$, achieved at $j = i + 1$. Let us finally assume that our formula is of the form $\phi U \psi$. Observe that the value of $\phi U \psi$ at position $i$ can be computed as follows: we go to the right, starting from the $i$th position, until we see a position $j_i$ where either $\phi$ is false, or $\psi$ is true, or this is the last position. Then $\phi U \psi$ holds at $i$ if and only if $\psi$ holds at $j_i$. That is, $(\bar{w}, i) \models \phi U \psi$ if and only if $(\bar{w}, j_i) \models \psi$, where $j_i \in \{i, \ldots, n - 1\}$ is the minimal position $j$ with $\tau(j) = 1$, where, in turn, the sequence $\tau$ is defined by $\tau(i) = \mathbb{I}\{(\bar{w}, i) \models \neg \phi \lor \psi\} \lor \mathbb{I}\{i = n - 1\}$, for $i = 0, \ldots, n - 1$. To compute the sequence $\tau$, we first compute $\phi \land \neg \psi$.
position-wise (we can do that because we already have \( \phi \) and \( \psi \) at our disposal), then we add the conjunction with \( \mathbb{I}\{i \neq n - 1\} \) by Lemma 1, and then we take the negation of the resulting sequence. To do so, it is enough to create a unique hard attention layer, where for every position \( i \) the attention is maximized at \( j_i \). Using our positional encoding and the induction hypothesis, we can obtain a sequence of vectors \( v_0, \ldots, v_{n-1} \in \mathbb{R}^4 \) such that: \[ v_i = \left( \cos \left( \frac{\pi (1 - 2^{-i})}{10} \right), \sin \left( \frac{\pi (1 - 2^{-i})}{10} \right), 1, \tau(i) \right). \] Consider a linear transformation \( B : \mathbb{R}^4 \to \mathbb{R}^4 \) such that \[ Bv_i = \left( \cos \left( \frac{\pi (1 - 2^{-i})}{10} \right), \sin \left( \frac{\pi (1 - 2^{-i})}{10} \right), 10\tau(i), 0 \right). \] Observe that \( \langle v_i, Bv_j \rangle = \cos \left( \frac{\pi (2^{-i} - 2^{-j})}{10} \right) + 10\tau(j) \). We claim that this expression is maximized at \( j = j_i \). First, because of the last term in it, it cannot be maximized at \( j \) with \( \tau(j) = 0 \). It remains to show that, among the \( j \)'s with \( \tau(j) = 1 \), this quantity is maximized at the minimal such \( j \) that is at least \( i \). In fact, in this case we have \( \langle v_i, Bv_j \rangle = \cos \left( \frac{\pi (2^{-i} - 2^{-j})}{10} \right) + 10 \). All the angles in question are in \([-\pi/10, \pi/10]\), so the cosine is maximized when \(|2^{-i} - 2^{-j}|\) is minimized. Now, this absolute value is at least \(2^{-i}\) when \( j < i \). In turn, this absolute value is smaller than \(2^{-i}\) for \( j \geq i \), and it is smaller for smaller \( j \), as required.

### 3.4 Applications of Our Main Result

We show two applications of our main result. First, UHATs accept all regular languages in AC$^0$. Second, UHATs are strictly more expressive than regular and context-free languages in terms of the acceptance of languages up to letter-permutation.

#### Regular languages in AC$^0$

There is an important fragment of FO(Mon) which is interesting in its own right. This is the logic FO(Mod), i.e., the extension of FO with unary numerical predicates of the form \( \mathrm{Mod}_{p,r} \), for \( p > 1 \) and \( 0 \leq r \leq p - 1 \). We have that \( \mathrm{Mod}_{p,r}(i) = 1 \) if and only if \( i \equiv r \pmod{p} \). In fact, by using a characterization given in Barrington et al. (1992), one can show that the languages definable in FO(Mod) are precisely the regular languages within AC$^0$. Then: **Corollary 1.** Let \( L \subseteq \Sigma^+ \) be a regular language in AC$^0$. There is an UHAT that accepts \( L \).

#### Recognizing regular languages up to letter-permutation

Although not all regular languages are accepted by UHATs (e.g., parity), we can use Theorem 1 to show that, up to letter-permutation, UHAT is in fact strictly more powerful than regular and context-free languages. To formalize our result, we recall the notion of semilinear sets and the Parikh image of a language. A **linear set** \( S \) is a subset of \( \mathbb{N}^d \) (for some positive integer \( d \), called dimension) of the form \( v_0 + \sum_{i=1}^{r} v_i \mathbb{N} := \{ v_0 + \sum_{i=1}^{r} k_i v_i : k_1, \ldots, k_r \in \mathbb{N} \} \) for some vectors \( v_0, \ldots, v_r \in \mathbb{N}^d \). A **semilinear set** \( S \) over \( \mathbb{N}^d \) is a finite union of linear sets over \( \mathbb{N}^d \).
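To keep the semilinear-set terminology concrete, here is a small sketch of membership testing for linear and semilinear sets over \( \mathbb{N}^d \). The brute-force coefficient bound (each \( k_i \) is at most \( \max(u) \) once zero generators are discarded) works because all generator vectors are non-negative; it is an illustrative check, exponential in the number of generators, and not part of the paper's constructions.

```python
import numpy as np
from itertools import product

def in_linear_set(u, v0, generators):
    """Is u in v0 + sum_i N*v_i, where u, v0 and all generators v_i lie in N^d?"""
    u, v0 = np.asarray(u), np.asarray(v0)
    gens = [np.asarray(v) for v in generators if np.any(v)]   # drop zero generators
    bound = int(u.max()) + 1            # each coefficient k_i is at most max(u)
    for ks in product(range(bound), repeat=len(gens)):
        cand = v0 + sum(k * v for k, v in zip(ks, gens))
        if np.array_equal(np.asarray(cand), u):
            return True
    return False

def in_semilinear_set(u, linear_sets):
    """A semilinear set is a finite union of linear sets, given as (v0, generators) pairs."""
    return any(in_linear_set(u, v0, gens) for v0, gens in linear_sets)

# letter-count vectors of words in (ab)^+ form the linear set (1,1) + (1,1)*N
assert in_linear_set((3, 3), (1, 1), [(1, 1)])
assert not in_linear_set((2, 3), (1, 1), [(1, 1)])
```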
Semilinear sets have a very tight connection to formal languages through the notion of the **Parikh image** of a language \( L \) (Parikh, 1966), which intuitively corresponds to the set of "letter-counts" of \( L \). More precisely, consider the alphabet \( \Sigma = \{a_1, \ldots, a_d\} \) and a language \( L \) over \( \Sigma \). For a word \( w \in \Sigma^* \), let \( |w|_{a_i} \) denote the number of occurrences of \( a_i \) in \( w \). The **Parikh image** \( P(L) \) of \( L \) is defined to be the set of tuples \( (|w|_{a_1}, \ldots, |w|_{a_d}) \in \mathbb{N}^d \), for \( w \in L \). For example, if \( L = \{a^n b^n : n \geq 0\} \) and \( L' = (ab)^* \), then \( P(L) = P(L') \). In this case, we say that \( L \) and \( L' \) are **Parikh-equivalent**. Note that \( L' \) is regular, while \( L \) is context-free but not regular. This is no coincidence, as shown by the celebrated Parikh's Theorem (cf. Parikh, 1966; see also Kozen, 1997). **Proposition 4** (Parikh, 1966). The Parikh images of both regular and context-free languages coincide with semilinear sets. In other words, although context-free languages are a strict superset of regular languages, they are in fact equally powerful up to letter-permutation. What about UHATs? We have that they are strictly more powerful than regular and context-free languages up to letter-permutation. **Proposition 5.** Each regular language has a Parikh-equivalent language accepted by an UHAT. In turn, there is an UHAT language with no Parikh-equivalent regular language.

4 LANGUAGES BEYOND AC\(^0\)

Transformer encoders with unique hard attention can only recognize languages in AC\(^0\), but a slight extension of the attention mechanism allows the recognition of languages lying outside this class (Hao et al., 2022). In this section, we show that in fact such an extended model can recognize all languages definable in a powerful logic that extends LTL with counting features. This logic can express interesting languages outside AC\(^0\), such as majority and parity.

4.1 AVERAGE HARD ATTENTION

For the results in this section, we consider an extended version of transformer encoders that utilize an average hard attention mechanism (Pérez et al., 2021; Hao et al., 2022). Following the literature, we call these AHAT. **Encoder layer with average hard attention.** As before, these layers are defined by two affine transformations, \(A, B : \mathbb{R}^d \rightarrow \mathbb{R}^d\), and one feed-forward neural network \(N : \mathbb{R}^{2d} \rightarrow \mathbb{R}^e\) with ReLU activation function. For every \(i \in \{0, \ldots, n-1\}\), we define \(S_i\) as the set of positions \(j \in \{0, \ldots, n-1\}\) that maximize \(\langle Av_i, Bv_j \rangle\). We then set \[ a_i \leftarrow \left( \sum_{j \in S_i} v_j \right) / |S_i|. \] After that, we set \(v'_i \leftarrow N(v_i, a_i)\), for each \(i = 0, \ldots, n-1\). That is, average hard attention returns the uniform average value among all positions that maximize attention. We also use future positional masking, which allows us to take into account only positions up to \(i\). If future positional masking is used, the sets \(S_i\) are defined as sets of positions \(j \in \{0, 1, \ldots, i\}\) that maximize \(\langle Av_i, Bv_j \rangle\). Positional masks have been employed on several occasions in theoretical papers (Yao et al., 2021; Bhattamishra et al., 2020; Hao et al., 2022) as well as in practice, for example, for training GPT-2 (Radford et al., 2019).
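For comparison with the unique-hard-attention layer sketched in Section 2.1, here is a minimal NumPy sketch of an average-hard-attention layer, with optional future positional masking. The function name, the tie tolerance `tol`, and the loop-based implementation are illustrative assumptions of this sketch.

```python
import numpy as np

def ahat_layer(V, A, b_A, B, b_B, N, masked=False, tol=1e-9):
    """One encoder layer with average hard attention.

    For each position i, S_i is the set of positions j maximizing <A v_i, B v_j>
    (restricted to j <= i under future positional masking); the attention vector
    a_i is the uniform average of v_j over j in S_i, and v'_i = N(v_i, a_i).
    """
    n = V.shape[0]
    scores = (V @ A.T + b_A) @ (V @ B.T + b_B).T      # scores[i, j] = <A v_i, B v_j>
    if masked:
        scores[np.triu_indices(n, k=1)] = -np.inf     # hide future positions j > i
    out = []
    for i in range(n):
        S_i = np.flatnonzero(scores[i] >= scores[i].max() - tol)  # all maximizers
        a_i = V[S_i].mean(axis=0)
        out.append(N(np.concatenate([V[i], a_i])))
    return np.stack(out)
```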
4.2 LTL EXTENDED WITH COUNTING TERMS

We present here LTL(C, +), an extension of LTL(Mon) that allows us to define counting properties over words in a simple manner. This requires the introduction of counting terms as defined next. **Counting terms.** Suppose \(\phi\) is a unary formula. Then \(\overleftarrow{\#}\phi\) and \(\overrightarrow{\#}\phi\) are counting terms. The interpretation of these terms in position \(i\) of a word \(\bar{w}\) of length \(n\) is defined as follows: \[ \overleftarrow{\#}\phi(\bar{w}, i) = |\{j \in \{0, \ldots, i\} \mid (\bar{w}, j) \models \phi\}|, \quad \overrightarrow{\#}\phi(\bar{w}, i) = |\{j \in \{i, \ldots, n-1\} \mid (\bar{w}, j) \models \phi\}|. \] That is, \(\overleftarrow{\#}\phi(\bar{w}, i)\) is the number of positions to the left of \(i\) (including \(i\)) that satisfy \(\phi\), while \(\overrightarrow{\#}\phi(\bar{w}, i)\) is the number of positions to the right of \(i\) (including \(i\)) that satisfy \(\phi\). Notice that, for words of length \(n\), counting terms take values in \(\{0, 1, \ldots, n\}\). **Counting formulas.** With counting terms and unary numerical predicates we can create new formulas in the following way. Let \(\phi\) be a unary formula and \(\Theta\) a unary numerical predicate. We define new formulas \(\Theta(\overleftarrow{\#}\phi)\) and \(\Theta(\overrightarrow{\#}\phi)\). The interpretation of such formulas on position \(i\) of a word \(\bar{w}\) of length \(n\) is as follows: \[ (\bar{w}, i) \models \Theta(\overleftarrow{\#}\phi) \iff \theta_n(\overleftarrow{\#}\phi(\bar{w}, i)) = 1, \qquad (\bar{w}, i) \models \Theta(\overrightarrow{\#}\phi) \iff \theta_n(\overrightarrow{\#}\phi(\bar{w}, i)) = 1. \] That is, the number of positions to the left (resp., right) of \(i\) (including \(i\)) that satisfy \(\phi\) satisfies the predicate \(\Theta\). As counting terms can take value \(n\), the value of \(\theta_n\) on \(n\) becomes useful. We also incorporate into our logic the possibility of checking linear inequalities with integer coefficients over counting terms. More specifically, for any finite set of unary formulas \(\phi_1, \ldots, \phi_k, \psi_1, \ldots, \psi_k\) and for any coefficients \(c_1, \ldots, c_k, d_1, \ldots, d_k \in \mathbb{Z}\) we can create a formula: \[ \sum_{j=1}^k c_j \cdot \overleftarrow{\#}\phi_j + \sum_{j=1}^k d_j \cdot \overrightarrow{\#}\psi_j \geq 0, \] which is interpreted as follows: \[ (\bar{w}, i) \models \sum_{j=1}^k c_j \cdot \overleftarrow{\#}\phi_j + \sum_{j=1}^k d_j \cdot \overrightarrow{\#}\psi_j \geq 0 \iff \sum_{j=1}^k c_j \cdot \overleftarrow{\#}\phi_j(\bar{w}, i) + \sum_{j=1}^k d_j \cdot \overrightarrow{\#}\psi_j(\bar{w}, i) \geq 0. \]

The logic LTL(C, +). We denote by LTL(C, +) the logic that is recursively defined as follows:

- Every LTL(Mon) formula is also an LTL(C, +) formula.
- Boolean combinations of LTL(C, +) formulas are LTL(C, +) formulas.
- If \( \phi \) and \( \psi \) are LTL(C, +) formulas, then so are \( X\phi \) and \( \phi U\psi \).
- If \( \phi \) is an LTL(C, +) formula and \( \Theta \) is a unary numerical predicate, then \( \Theta(\overleftarrow{\#}\phi) \) and \( \Theta(\overrightarrow{\#}\phi) \) are LTL(C, +) formulas.
- If \( \phi_1, \ldots, \phi_k, \psi_1, \ldots, \psi_k \) are formulas of LTL(C, +), then \( \sum_{j=1}^{k} c_j \cdot \overleftarrow{\#}\phi_j + \sum_{j=1}^{k} d_j \cdot \overrightarrow{\#}\psi_j \geq 0 \) is a formula of LTL(C, +).

4.3 LTL(C, +)-definable languages are accepted by encoders

Next, we state the main result of this section: languages definable by LTL(C, +) formulas are accepted by transformer encoders with average hard attention. **Theorem 2.** Let \( \Sigma \) be a finite alphabet and \( \phi \) an LTL(C, +) formula defined over words from the alphabet \( \Sigma \). There is an AHAT \( T \) that accepts \( L(\phi) \).
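To illustrate the counting terms concretely, here is a small reference evaluator for \( \overleftarrow{\#}\phi \) and \( \overrightarrow{\#}\phi \) and for two LTL(C, +)-style properties over the alphabet \(\{a, b\}\). This is only a sketch of the semantics (the names `count_left`, `count_right`, `majority_a`, `parity_a` are illustrative), not the AHAT construction behind Theorem 2; `majority_a` checks the non-strict variant "at least as many $a$'s as $b$'s".

```python
def count_left(w, i, phi):
    """#<-phi(w, i): number of positions j in {0, ..., i} with (w, j) |= phi."""
    return sum(1 for j in range(i + 1) if phi(w, j))

def count_right(w, i, phi):
    """#->phi(w, i): number of positions j in {i, ..., n-1} with (w, j) |= phi."""
    return sum(1 for j in range(i, len(w)) if phi(w, j))

is_a = lambda w, j: w[j] == "a"          # atomic formula "the letter here is a"
is_b = lambda w, j: w[j] == "b"

def majority_a(w):
    """(w, 0) |= (#->a - #->b >= 0): at least as many a's as b's in w."""
    return count_right(w, 0, is_a) - count_right(w, 0, is_b) >= 0

def parity_a(w):
    """(w, 0) |= Theta_even(#->a), where theta_n(k) = 1 iff k is even."""
    return count_right(w, 0, is_a) % 2 == 0

assert majority_a("aabab") and not majority_a("abbb")
assert parity_a("abab") and not parity_a("abb")
```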
As a corollary to Theorem 2, we show that AHATs are rather powerful in counting. To make this claim more formal, we study permutation-closed languages, i.e., languages \( L \) such that \( \bar{v} \in L \) iff every letter-permutation of \( \bar{v} \) is in \( L \). For a language \( L \), we write \( \text{perm}(L) \) for the permutation closure of \( L \), i.e., \( \text{perm}(L) = \{ \bar{w} : P(\bar{w}) = P(\bar{v}), \text{ for some } \bar{v} \in L \} \). Observe that \( \text{perm}((abc)^*) \) consists of all strings with the same number of occurrences of \( a \), \( b \), and \( c \); this is not even context-free. Owing to Parikh's Theorem, to recognize \( \text{perm}(L) \), where \( L \) is a regular language, an ability to perform letter-counting and linear arithmetic reasoning (i.e., semilinear set reasoning) is necessary. AHATs possess such an ability, as shown by the following corollary. **Corollary 2.** The permutation closure \( \text{perm}(L) \) of any regular language \( L \) is accepted by an AHAT. Moreover, any permutation-closed language over a binary alphabet is accepted by an AHAT. Both majority and parity are permutation-closed and are over a binary alphabet. Hence, by the previous result, they are both accepted by AHATs. While for majority this was known (Hao et al., 2022), the result for parity is new.

5 CONCLUSIONS AND FUTURE WORK

We have conducted an investigation of the problem of which languages can be accepted by transformer encoders with hard attention. For UHATs, we have demonstrated that while they cannot accept all languages in AC$^0$, they can still accept all languages in a 'monadic' version of it defined by the logic FO(Mon). Crucial to the proof of this result is the equivalence between FO and LTL, as provided by Kamp's Theorem. In turn, we have shown that AHATs are capable of expressing any language definable in a powerful counting logic, LTL(C, +), that can express properties beyond AC$^0$. This implies, among other things, that the parity language can be accepted by an AHAT. In work by Angluin et al. (2023), contemporaneous to ours, it was shown that the logic FO(Mon) exactly captures the languages accepted by UHATs with positional masking and finite-valued positional encoding. At the same time, with our technique, it can be shown that there are languages beyond FO(Mon) that can be accepted by UHATs with infinite-valued positional encoding, for instance, the language of palindromes. The idea is that since we do not use positional masking, we can use an arbitrary order on positions, in particular, one that puts the language of palindromes into FO(Mon). Due to the space constraints, we omit a more detailed discussion of this.

ACKNOWLEDGMENTS

Barceló and Kozachinskiy are funded by ANID–Millennium Science Initiative Program - ICN17002 and by the National Center for Artificial Intelligence CENIA FB210017, Basal ANID. Anthony Lin was supported by the European Research Council under the European Union's Horizon research and innovation programme (grant agreement no 101089343).

REFERENCES

Miklós Ajtai. $\Sigma_1^1$-formulae on finite structures. *Ann. Pure Appl. Log.*, 24(1):1–48, 1983.
Dana Angluin, David Chiang, and Andy Yang. Masked hard-attention transformers and Boolean RASP recognize exactly the star-free languages, 2023.
Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, and Vladimir Podolskii. Logical languages accepted by transformer encoders with hard attention. *arXiv preprint arXiv:2310.03817*, 2023.
David A. Mix Barrington, Kevin J. Compton, Howard Straubing, and Denis Thérien. Regular languages in NC$^1$. *J. Comput. Syst. Sci.*, 44(3):478–499, 1992.
David A. Mix Barrington, Neil Immerman, Clemens Lautemann, Nicole Schweikardt, and Denis Thérien. First-order expressibility of languages with neutral letters or: The crane beach conjecture. *J. Comput. Syst. Sci.*, 70(2):101–127, 2005.
Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. On the ability and limitations of transformers to recognize formal languages. In *EMNLP*, pp. 7096–7116, 2020.
David Chiang and Peter Cholak. Overcoming a theoretical limitation of self-attention. In *ACL*, pp. 7654–7664, 2022.
David Chiang, Peter Cholak, and Anand Pillay. Tighter bounds on the expressivity of transformer encoders. In *ICML*, volume 202, pp. 5544–5562. PMLR, 2023.
Edmund M. Clarke, Orna Grumberg, Daniel Kroening, Doron A. Peled, and Helmut Veith. *Model checking, 2nd Edition*. MIT Press, 2018.
Merrick L. Furst, James B. Saxe, and Michael Sipser. Parity, circuits, and the polynomial-time hierarchy. In *FOCS*, pp. 260–270. IEEE Computer Society, 1981.
Michael Hahn. Theoretical limitations of self-attention in neural sequence models. *Trans. Assoc. Comput. Linguistics*, 8:156–171, 2020.
Yiding Hao, Dana Angluin, and Robert Frank. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. *Trans. Assoc. Comput. Linguistics*, 10:800–810, 2022.
Neil Immerman. *Descriptive complexity*. Graduate texts in computer science. Springer, 1999.
H. W. Kamp. *Tense Logic and the Theory of Linear Order*. PhD thesis, University of California, Los Angeles, 1968.
Dexter C. Kozen. *Automata and Computability*. Springer, 1997.
William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers, 2023.
William Merrill, Ashish Sabharwal, and Noah A. Smith. Saturated transformers are constant-depth threshold circuits. *Trans. Assoc. Comput. Linguistics*, 10:843–856, 2022. URL https://transacl.org/ojs/index.php/tacl/article/view/3465
Rohit Parikh. On context-free languages. *J. ACM*, 13(4):570–581, 1966. doi: 10.1145/321356.321364. URL https://doi.org/10.1145/321356.321364
Jorge Pérez, Pablo Barceló, and Javier Marinkovic. Attention is Turing-complete. *J. Mach. Learn. Res.*, 22:75:1–75:35, 2021.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019.
Lena Strobl. Average-hard attention transformers are constant-depth uniform threshold circuits, 2023.
p14iRzavpt
Though equations (8) and (9) are well motivated, a more rigorous theoretical justification (via bounding) would strengthen the claims of the proposed objective, particularly in relation to the statements about the final learned student model and about the second term of (9) becoming zero.
Knowledge Distillation with Perturbed Loss: From a Vanilla Teacher to a Proxy Teacher

Anonymous authors. Paper under double-blind review.

Abstract

Knowledge distillation is a popular technique to transfer knowledge from large teacher models to a small student model. Typically, the student learns to imitate the teacher by minimizing the KL divergence between its output distribution and the teacher's output distribution. In this work, we argue that such a learning objective is sub-optimal because there exists a discrepancy between the teacher's output distribution and the ground truth label distribution. Therefore, forcing the student to blindly imitate the unreliable teacher output distribution leads to inferior performance. To this end, we propose a novel knowledge distillation objective PTLoss by first representing the vanilla KL-based distillation loss function via a Maclaurin series and then perturbing the leading-order terms in this series. This perturbed loss implicitly transforms the original teacher into a proxy teacher with a distribution closer to the ground truth distribution. We establish the theoretical connection between this "distribution closeness" and the student model generalizability, which enables us to select the PTLoss's perturbation coefficients in a principled way. Extensive experiments on multiple natural language processing tasks demonstrate the effectiveness of PTLoss with teachers of different scales.

1 Introduction

Knowledge distillation (KD) is a widely-used technique to transfer knowledge from large teacher models into a much smaller student model with minimum sacrifice of the teacher model's predictive power (Buciluă et al., 2006; Hinton et al., 2015). The vanilla training objective in KD, such as the KL loss (Hinton et al., 2015; Menon et al., 2021; Stanton et al., 2021), encourages the student's outputs to be as close to the teacher's outputs as possible, which implicitly assumes the teacher's outputs on the distillation data are perfect. However, the teacher's output distributions can be biased from the ground truth due to various factors, such as the inductive bias encoded in the teacher model architecture, miscalibration in the training procedure (Menon et al., 2021), or the bias in the teacher model training set (Liu et al., 2021; Łukasik et al., 2021). Forcing the student to blindly imitate the teacher's outputs can make the student inherit such biases and produce suboptimal predictions. To overcome this challenge, one common approach (Hinton et al., 2015) suggests scaling the teacher's logits via a temperature parameter. A proper temperature value can enhance the quality of the teacher model's output distribution by making it closer to the true label distribution (Menon et al., 2021). However, the shifting space offered by temperature scaling is limited, and the optimal temperature value relies on resource-intensive grid search. Along a separate line, label smoothing (Szegedy et al., 2016) is proposed to regularize neural networks, and modulated loss functions (Lin et al., 2017; Leng et al., 2022) are designed to address various statistical issues in model training such as overfitting and data imbalance. Despite their potential, there is a lack of work that explores tailoring such techniques for more robust knowledge distillation. In this study, we propose PTLoss for knowledge distillation, which generalizes the vanilla KL loss function and implicitly creates a debiased teacher distribution closer to the ground truth (as shown in Figure 1).
Figure 1: PTLoss implicitly transforms the original teacher into a proxy teacher with a distribution closer to the ground truth distribution: it implicitly shifts $p^t$ to $p^{t_{px}}$ such that $\|p^* - p^t\|_2 > \|p^* - p^{t_{px}}\|_2$. This approach addresses the issue of sub-optimal student models resulting from discrepancies between the teacher's output distribution and the ground truth distribution. By introducing perturbations to the standard KL loss represented by its Maclaurin series, we obtain a better proxy teacher, which leads to a more effectively distilled student.

Instead of forcing an out-and-out imitation of the original teacher model, PTLoss moderates the distillation objective by adding perturbations to the standard KL loss. Specifically, we first represent the KL loss using a Maclaurin series and then perturb its leading-order terms to construct a more flexible learning objective. This manipulation enables consequential adjustments to the teacher's output distribution. To determine the perturbation extent, we compute the equivalent distribution of this implicitly shifted teacher's output distribution after perturbations (named the "proxy teacher") and measure the empirical deviation between the proxy teacher and the ground truth data. This leads to a systematic search strategy for the perturbation coefficients — the near-optimal perturbation coefficients should minimize the deviation between the distillation risk and the population risk on the validation set. Theoretically, we justify the effectiveness of PTLoss by proving that it can reduce the deviation of the distillation risk from the population risk compared to the KL loss. We draw a connection between PTLoss and other perturbation methods (e.g., temperature scaling [Hinton et al., 2015], label smoothing [Szegedy et al., 2016], and focal loss [Lin et al., 2017]). We illustrate that PTLoss can debias the teacher to produce higher-fidelity outputs via a finer-grained perturbation, while subsuming existing perturbation techniques as special cases. Experiments on multiple datasets with different-sized teacher models demonstrate the empirical advantages of PTLoss.

Contributions. In summary, we make the following contributions: (1) a new knowledge distillation loss function, PTLoss, which formulates the vanilla KD loss in the form of a Maclaurin series and perturbs it to improve the fidelity of teacher models; (2) a principled method to compute the proxy teacher for determining the perturbation coefficients in PTLoss; (3) theoretical analysis of why PTLoss can lower the distillation risk bound; and (4) comprehensive experiments on multiple NLP datasets with different-sized teacher models showing the advantage of PTLoss.

2 PRELIMINARIES

Multi-class Classification. In a multi-class classification problem with $C$ classes, we are given a set of training examples $D = \{(x_n, y_n)\}_{n=1}^N$ where input $x_n \in X$ and output $y_n$ is a one-hot vector in $Y = \{y \mid y \in \{0, 1\}^C, 1^T y = 1\}$ indicating the target label of example $x_n$. The goal is to learn a probability predictor $p : X \rightarrow \mathbb{R}^C$ by minimizing the population risk: $$R(p) = \mathbb{E}_{(x,y)}[\ell(y, p(x))],$$ where $\ell(y, p(x))$ is the loss of predicting $p(x)$ when the true label of example $x$ is $y$.
A canonical loss function is the cross-entropy loss: $\ell_{CE}(y, p(x)) = -y^T \log(p(x))$, and we may further approximate the above risk via the empirical risk on the training set $D$: $$\bar{R}(p; D) = \frac{1}{N} \sum_{n=1}^N y_n^T(-\log(p(x_n))).$$

Our Problem Formulation. In this work, we study the knowledge distillation problem where the labeled training set \( \mathcal{D} \) is inaccessible\(^1\). Specifically, we are only given an unlabeled distillation set \( \mathcal{D}_u \) and a teacher model \( p^t \), and asked to learn a student model \( p^s \).

Standard Distillation Strategy. A standard KD strategy [Hinton et al., 2015] is to replace the ground truth one-hot label \( y_n \) in Eq. 2 with the teacher model's output probabilistic label estimate \( p^t(x_n) \) and utilize the KL divergence loss to learn the student model \( p^s \) via the distillation empirical risk: \[ \tilde{R}_{KL}(p^s; p^t; \mathcal{D}_u) = \frac{1}{N_u} \sum_{n=1}^{N_u} \ell_{KL}(p^t(x_n), p^s(x_n)), \] where \( N_u = |\mathcal{D}_u| \) and \( \ell_{KL}(p, q) = KL(p||q) = p^T \log(p) - p^T \log(q) \).

3 Perturbed Distillation Loss

Using the KL divergence loss (in short "KL loss") for distillation essentially assumes the teacher model is perfect and forces the student model to mimic the teacher's output label distribution. In reality, the teacher model can produce a biased estimate of the label distribution and lead to a sub-optimal student model, as demonstrated by both theoretical analysis [Menon et al., 2021] and empirical observations [Müller et al., 2019] (as well as our experiments in Section 5.1). In this work, we present a new distillation loss that generalizes the standard KL loss to accommodate various degrees of distribution gaps between the biased teacher's output distribution and the underlying ground truth distribution. Inspired by the PolyLoss [Leng et al., 2022], we propose to first replace the logarithmic terms in the standard KL loss with their corresponding Maclaurin series and then perturb the polynomial terms as follows: \[ \log(x) = - \sum_{m=1}^{\infty} \frac{(1-x)^m}{m} \;\; \xrightarrow{\text{perturb polynomial term coefficients}} \;\; \log(x) \approx - \sum_{m=1}^{\infty} \left( \frac{1}{m} + \epsilon_m \right)(1-x)^m \] Here, we essentially replace the original coefficient \( \frac{1}{m} \) of the \( m \)-th order polynomial term in the standard KL loss with \( \left( \frac{1}{m} + \epsilon_m \right) \). By further replacing the logarithmic terms in the standard KL loss (Eq. 3) using the above Eq. 4, we have: \[ \ell_{KL}(p^t(x_n), p^s(x_n)) = -H(p^t(x_n)) + \sum_{c \in [C]} p^t_c(x_n) [-\log p^s_c(x_n)] \] \[ \approx -H(p^t(x_n)) + \sum_{c \in [C]} p^t_c(x_n) \left[ -\log p^s_c(x_n) + \sum_{m=1}^{\infty} \epsilon_{c,m} (1-p^s_c(x_n))^m \right], \] where \( p^t_c(x_n) \) and \( p^s_c(x_n) \) denote the probability that example \( x_n \) belongs to the class \( c \) according to the teacher (student) model, and \( H(p^t(x_n)) \) is the entropy of the teacher output distribution. We can further separate out the perturbation coefficients on the right hand side of Eq. 5 and merge \( \sum_{c \in [C]} p^t_c(x_n) [-\log p^s_c(x_n)] \) with \( H(p^t(x_n)) \) to obtain our perturbed distillation loss: \[ \ell_{PT}(p^t(x_n), p^s(x_n)) = \ell_{KL}(p^t(x_n), p^s(x_n)) + \sum_{c \in [C]} p^t_c(x_n) \sum_{m=1}^{\infty} \epsilon_{c,m} (1-p^s_c(x_n))^m. \] The above equation presents our perturbed distillation loss in its most general form.
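For concreteness, here is a minimal NumPy sketch of the KL distillation loss and the perturbed loss for a single example, with only finitely many perturbation coefficients supplied (the truncated setting used in practice). The function names, the `(C, M)` layout of the coefficient array, and the small constant added inside the logarithms are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def kl_loss(p_t, p_s, tiny=1e-12):
    """Vanilla distillation loss  l_KL(p^t, p^s) = p^t . log p^t - p^t . log p^s  (Eq. 3)."""
    p_t, p_s = np.asarray(p_t, float), np.asarray(p_s, float)
    return float(p_t @ (np.log(p_t + tiny) - np.log(p_s + tiny)))

def pt_loss(p_t, p_s, epsilon):
    """Perturbed distillation loss: KL loss plus class-wise perturbations of the
    leading polynomial terms of the Maclaurin series of -log p^s_c (Eq. 6).

    epsilon : (C, M) array; epsilon[c, m-1] perturbs the m-th order term of class c.
    """
    p_t, p_s, epsilon = np.asarray(p_t, float), np.asarray(p_s, float), np.asarray(epsilon, float)
    M = epsilon.shape[1]
    powers = np.stack([(1.0 - p_s) ** m for m in range(1, M + 1)], axis=1)   # shape (C, M)
    perturbation = float((p_t[:, None] * epsilon * powers).sum())
    return kl_loss(p_t, p_s) + perturbation

# with all coefficients set to zero, the perturbed loss falls back to the KL loss
p_t, p_s = [0.7, 0.3], [0.6, 0.4]
assert np.isclose(pt_loss(p_t, p_s, np.zeros((2, 3))), kl_loss(p_t, p_s))
```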
In practice, however, we cannot tune an infinite number of coefficients \( \epsilon_{c,m} \) and thus we propose to only tune the first \( M \) leading polynomial coefficients while keeping the rest unchanged as follows: \[ \ell_{PT-M}(p^t(x_n), p^s(x_n)) = \ell_{KL}(p^t(x_n), p^s(x_n)) + \sum_{c \in [C]} p^t_c(x_n) \sum_{m=1}^{M} \epsilon_{c,m} (1-p^s_c(x_n))^m. \] We can see that if we set all \( \epsilon_{c,m} \) to 0, then \( \ell_{PT} \) falls back to \( \ell_{KL} \), and thus the perturbed distillation loss can be considered a generalization of the standard KL loss.

---
$^1$This setting reflects the real-world scenario where large teacher models (e.g., ChatGPT [OpenAI, 2022] and GPT-4 [OpenAI, 2023]) only expose their outputs and/or APIs without original training data because of their large model sizes and cautions toward data leakage/misuse.

Figure 2 presents how PTLoss adjusts biased teachers. For visualization simplicity, we set the number of classes $C = 2$. In Figure 2a, we vary the teacher probability to show how the biased teacher model will impact the distilled student model under either the standard KL loss or our proposed PTLoss. We observe that PTLoss can guide the student's predictions toward the ground truth and thus effectively reduces the inherent bias in the teacher's output probabilities. In Figure 2b, we demonstrate that PTLoss enables a diverse shift space for the loss curve. By setting the perturbation coefficients, PTLoss allows flexible adjustments to the loss curve. Combined with the perturbation coefficient selection method discussed in Sec. 4.3, we can determine the perturbation that optimizes the distillation process.

Connections to other perturbation methods. In Appendix A.1, we establish the connections between PTLoss and other related methods that transform the teacher output probabilities, such as label smoothing [Szegedy et al., 2016], temperature scaling [Hinton et al., 2015], and focal loss [Lin et al., 2017]. We show that the loss shift space produced by PTLoss encompasses these alternative techniques and thus PTLoss can offer additional adjustment capabilities.

4 Proxy Teacher and The Principle of Selecting Polynomial Coefficients

In this section, we first present a theorem to show how the teacher model affects the gap between a student model's distillation empirical risk and its population risk (§ 4.1). Then, we demonstrate that using PTLoss implicitly transforms the original teacher model to a proxy teacher under the KL loss. Based on the above theorem, we know that when this proxy teacher distribution is closer to the true distribution, we obtain a better distilled student model (§ 4.2). Finally, we establish our principle of selecting the perturbation coefficients in PTLoss: searching for the coefficients that lead to a proxy teacher closest to the empirical estimate of the true distribution on a validation set (§ 4.3).

4.1 The Connection of the Teacher Model and the Risks of the Student Model

Theorem 1. Given a teacher model $p^t$ and an unlabeled distillation dataset $\mathcal{D}_u$ with an unknown true distribution $p^*$, we have, for any probability predictor $p : X \rightarrow \mathbb{R}^C$: $$\mathbb{E} \left[ (\tilde{R}_{KL}(p; p^t, \mathcal{D}_u) - R(p))^2 \right] \leq \frac{2}{N_u} \cdot \mathbb{V} \left[ p^t(x)^T \log(p(x)) \right] + \mathcal{O} \left( \left( \mathbb{E}_x \| p^t(x) - p^*(x) \|_2 \right)^2 + \mathbb{E}_x \left[ \left( p^t(x)^T \log p^t(x) \right)^2 \right] \right),$$ where $\mathbb{V}[\cdot]$ denotes the variance of a random variable.
We defer the detailed proofs of above theorem to Appendix A.2 and focus on its implications here. We can see that the gap between a model $p$’s distillation empirical risk and its population risk depends on three terms: (1) the variance of its KL distance to the teacher model $p^t$, (2) the $L_2$ distance between the teacher model output distribution $p^t$ and the true distribution $p^*$, and (3) the entropy of the teacher distribution. In practice, obtaining a sizable unlabeled distillation set $\mathcal{D}_u$ is relatively straightforward, which leads to a large value of $N_u$. As a result, the first term (of order $O(1/N_u)$) will converge to 0 as $N_u$ keeps increasing and the latter two terms (one quantifies the distance between teacher $p^t$ and true $p^*$, and the other quantifies the teacher’s uncertainty) will dominate the risk gap. This observation also resonates with our intuition that an accurate and well-calibrated teacher yields better improved bounds on the generalization error of the student. 4.2 The Equivalence of Proxy Teacher under KL Loss and Original Teacher under PTLoss The above theorem states that an ideal teacher model, when used in KL loss for distillation, should output a distribution as close to the true distribution as possible. In reality, however, the teacher model is usually fixed. Here, we show that using PTLoss for distillation can implicitly transform the original teacher to a proxy teacher under the KL loss. Namely, given the original teacher model $p^t$ and a set of perturbation coefficients $\{\epsilon_{c,m}\}$ in PTLoss, we can obtain a proxy teacher $p^{t_{px}}$ such that: $$\tilde{R}_{KL}(p^s; p^{t_{px}}, \mathcal{D}_u) = \tilde{R}_{PT-M}(p^s; p^t, \mathcal{D}_u)$$ $$= \frac{1}{N_u} \sum_{n=1}^{N_u} \ell_{PT-M}(p^t(x_n), p^s(x_n)), \quad (8)$$ which establishes the equivalence of proxy teacher under KL loss and original teacher under PTLoss. With the proxy teacher $p^{t_{px}}$, we aim to determine the best perturbation coefficients $\{\epsilon_{c,m}\}$. Note for each $\{\epsilon_{c,m}\}$, we can obtain a proxy teacher. We illustrate how we obtain the proxy teacher in the rest of this subsection, and discuss how to select the best perturbation coefficients in §4.3. Intuitively, the proxy teacher is derived by solving the below optimization problem: $$\min_{p^{t_{px}}} \| \tilde{R}_{PT-M}(p^s; p^t, \mathcal{D}_u) - \tilde{R}_{KL}(p^s; p^{t_{px}}, \mathcal{D}_u) \|_2. \quad (9)$$ In practice, however, we do not need the above risk equivalence in Eq. 8 to hold for all possible student models $p^s$. Instead, we focus on the minimizer of the left-hand side of Eq. 8 because it is practically close to the final learned student model. By substituting this minimizer $p^s = p^{t_{px}}$ into Eq. 9, the second term in the norm of Eq. 9 becomes 0, and the first term could be expanded by its definition in Eq. 8, we thus have the following objective: $$\min_{p^{t_{px}}} \left\| \frac{1}{N_u} \sum_{n=1}^{N_u} \left( \ell_{KL}(p^t(x_n), p^{t_{px}}(x_n)) + \sum_{c \in [C]} \sum_{m=1}^{M} \epsilon_{c,m} (1 - p^{t_{px}}(x_n))^m \right) \right\|_2. \quad (10)$$ This objective enables us to solve $p^{t_{px}}$ given $p^t$ and $\{\epsilon_{c,m}\}$, where $p^t$ is the teacher’s output probability on the validation set, and $\{\epsilon_{c,m}\}$ is a given set of perturbation coefficients. However, this optimization problem is nonlinear and lacks a closed-form analytical solution. 
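To make this concrete, one simple (hedged) way to obtain a proxy teacher is to treat each validation example independently, write the bracketed quantity of Eq. 10 as a scalar residual (including the teacher weights \( p^t_c(x_n) \), as in Eq. 7), and drive it towards zero with a generic numerical optimizer, handling the simplex constraint by optimizing logits through a softmax. The sketch below (NumPy/SciPy, hypothetical function names and toy values) illustrates this idea; it is not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1), np.clip(q, eps, 1)
    return float(np.sum(p * (np.log(p) - np.log(q))))

def proxy_teacher_for_example(p_t, epsilons):
    """Find a proxy teacher distribution q for one example such that
    kl(p_t, q) + sum_c p_t[c] * sum_m eps[c, m] * (1 - q[c])**m is (close to) zero.

    p_t: teacher probabilities, shape (C,).  epsilons: shape (C, M).
    The simplex constraint on q is handled by optimizing logits and mapping
    them through a softmax (a reparameterization choice made for this sketch).
    """
    C, M = epsilons.shape
    powers = np.arange(1, M + 1)

    def objective(logits):
        q = softmax(logits)
        pert = np.sum(p_t[:, None] * epsilons * (1.0 - q)[:, None] ** powers[None, :])
        return (kl(p_t, q) + pert) ** 2  # squared residual of the bracketed term

    res = minimize(objective, x0=np.log(np.clip(p_t, 1e-6, None)), method="Nelder-Mead")
    return softmax(res.x)

# Toy usage: a two-class teacher and first/second-order perturbations.
p_t = np.array([0.8, 0.2])
eps = np.array([[0.2, 0.0],
                [0.0, 0.0]])
print(proxy_teacher_for_example(p_t, eps))
```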
Consequently, we compute the $p^{t_{px}}$ using the numerical approach, and the details are discussed in Appendix A.3. We have also considered an alternative solution to this optimization problem, which involves defining a parameterized function $g_\theta(\cdot) : [0, 1]^C \rightarrow [0, 1]^C$ that explicitly transforms the original teacher to the proxy teacher, namely $g_\theta(p^t) = p^{t_{px}}$. We would then find the best $\theta$ minimizing the above objective (possibly via gradient-based methods). This approach leads to a smooth proxy teacher but also introduces bias from the function class defined by $\theta$. Therefore, we leave it to future work and resort to the numerical approach in this study. 4.3 Selecting Perturbation Coefficients via the Best Proxy Teacher For each candidate set of perturbation coefficients $\{\epsilon_{c,m}\}$ in PTLoss, we can find a corresponding proxy teacher and compute its risk deviation upper bound according to theorem 1. In practice, the size --- 2We use a hybrid algorithm of the Newton-Raphson method and the Levenberg-Marquardt algorithm as defined in `scipy.optimize.fsolve` [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html) Algorithm 1 Automated Perturbation Coefficients Selection Require: Validation set \( D_v \), Teacher model \( p^t \), Max perturbation order \( N_M \), Max search trails \( N_k \), Perturbation coefficient search space \( S \). Initialize \( \{ \epsilon_{c,m} \} \leftarrow \{ \}, \hat{Q}^* \leftarrow \infty \). for \( M = 1 \) to \( N_M \) do for \( k = 1 \) to \( N_k \) do Randomly sample a set of perturbation coefficients \( \{ \epsilon_{c,m} \} \) from \( S \). Solve the proxy teacher \( p^{tpx} \) given \( \{ \epsilon_{c,m} \} \) and \( p^t \) via Eq. [10]. Compute the quality score of perturbation coefficients \( \hat{Q}( \{ \epsilon_{c,m} \} ) \) via Eq. [11]. if \( \hat{Q}( \{ \epsilon_{c,m} \} ) < \hat{Q}^* \) then \( \hat{Q}^* \leftarrow \hat{Q}( \{ \epsilon_{c,m} \} ), \{ \epsilon_{c,m}^* \} = \{ \epsilon_{c,m} \} \). end if end for end for return \( \{ \epsilon_{c,m}^* \} \). of distillation set \( N_u \) is typically large and thus we can omit the \( O(1/N_u) \) variance term. Furthermore, since the ground truth distribution \( p^* \) is unknown, we use an unbiased estimator to replace it. Finally, we replace the expectation by the sample mean and define the empirical risk below: \[ \hat{Q}( \{ \epsilon_{c,m} \} ) = \left( \frac{1}{N_v} \sum_{n=1}^{N_v} \| p^{tpx}(x_n) - y_n \|_2^2 \right)^2 + \frac{1}{N_v} \sum_{n=1}^{N_v} \left( p^{tpx}(x_n)^T \log p^{tpx}(x_n) \right)^2, \] where \( N_v \) is the size of validation set and \( y_n \) is a one-hot label vector of \( x_n \), serving as the unbiased estimation of \( p^*(x_n) \). We use \( \hat{Q}( \{ \epsilon_{c,m} \} ) \) as a “quality score” for each candidate coefficients set. Users can define a search space of \( \{ \epsilon_{c,m} \} \) and we will pick the optimal \( \{ \epsilon_{c,m}^* \} \) that minimizes \( \hat{Q} \). We present the pseudo-code for selecting perturbation coefficients in Algorithm 1 and the search time for perturbation coefficients is detailed in Appendix A.4. 5 EXPERIMENTS In this section, we first conduct experiments on a synthetic dataset to verify our assumption that the teacher outputting a distribution closer to the ground truth distribution leads to a better student (§5.1). 
Then, we present our main results on 6 real-world NLP datasets (§5.2). Moreover, we show how the proxy teacher enhances the distillation process (§5.3). In the appendix, we further evaluate the performance of PTLoss on CIFAR-100 to show its potential in computer vision tasks (§A.8). 5.1 Experiments on Synthetic Gaussian Dataset (a) OHT - one-hot training; LS - label smoothing; GT - ground truth; KD - knowledge distillation; ESKD: early-stopped KD (Ren et al., 2022); PTLoss: here we use 3-order perturbation. (b) Correlation between the \( L_2 \)-distance (between the teacher model \( p^t \) with different levels of perturbations and the ground truth \( p^* \)) and the test accuracy of the student model. Figure 3: Experiments on a synthetic Gaussian dataset. We first conduct an illustrative experiment with a synthetic dataset where the ground truth distribution \( P^*(x) \) is known. Specifically, we follow Ren et al. (2022) to generate \( 10^5 \) examples from a mixture of Gaussian distributions and train an MLP with 3 hidden layers on this synthetic dataset\(^3\). We compare PT Loss with 4 baselines: one-hot supervision (OHT), label smoothing (LS), standard knowledge distillation (KD), and early-stopped knowledge distillation (ESKD) (see details in below §5.2). As illustrated in Fig. 3a, the quality of the distilled student improves as the \( L_2 \)-distance between the teacher distribution and the ground truth distribution decreases. On this synthetic Gaussian dataset, PT Loss also outperforms the baselines after adding a 3-order perturbation. In Fig. 3b, we sample 10 proxy teachers in different stages of the perturbation coefficient searching process (§4.3) and compare their results. It is clear that a teacher model with a smaller \( L_2 \)-distance to the ground truth distribution can lead to a better student model. This observation verifies our hypothesis in Eq. 11 — searching a proxy teacher closer to the ground truth distribution can reduce the empirical deviation and improve the distilled student model. ### 5.2 Experiments on Natural Language Datasets **Tasks and Datasets.** We conduct our main experiments on 6 natural language datasets, including (1) CoLA (Warstadt et al., 2018) for linguistic acceptability, (2) MNLI (Williams et al., 2017) for multi-genre natural language inference, (3) MRPC (Dolan and Brockett, 2005) for paraphrase similarity matching, (4) RTE (Wang et al., 2018) for textual entailment inference, (5) SST-2 (Wang et al., 2018) for sentiment analysis, and (6) BoolQ (Clark et al., 2019) for boolean question answering. We list the detailed dataset statistics in the Appendix A.6. **Model Architectures.** For the teacher model, we choose the T5 architecture (Raffel et al., 2020) and select two teacher models of different scales. Specifically, we use T5-xxl with 11 billion parameters and T5-large with 770 million parameters. For the student model, we use the BERT-base model (Devlin et al., 2018) with 110 million parameters. | Method | CoLA (Matt.) | MNLI (Acc.) | MRPC (F1) | RTE (Acc.) | SST-2 (Acc.) | BoolQ (Acc.) | Average | |-------------------------|--------------|-------------|-----------|------------|--------------|--------------|---------| | Teacher T5-xxl (#Param. = 11B) | 71.5 | 94.7 | 92.4 | 92.2 | 96.4 | 89.1 | 89.4 | | Standard KL | 58.8 | 90.3 | 88.3 | 78.1 | 89.2 | 69.5 | 79.0 | | Temp. 
Scaling (Hinton et al., 2015) | 59.4 | 90.7 | 88.9 | 79.6 | 89.4 | 72.0 | 80.0 | | Label Smoothing (Szegedy et al., 2016) | 59.3 | 90.6 | 89.2 | 79.2 | 89.9 | 68.6 | 79.6 | | Focal (Lin et al., 2017) | 59.2 | 90.7 | 88.7 | 80.4 | 89.3 | 68.2 | 79.6 | | Flooding (Ishida et al., 2020) | 58.9 | 90.6 | 89.6 | 80.2 | 89.3 | 69.3 | 79.7 | | CRD (Tian et al., 2019) | 59.5 | 90.5 | 90.6 | 81.1 | 89.6 | 71.8 | 80.5 | | Annealing KD (Jafari et al., 2021) | 59.8 | 90.7 | 90.0 | 80.7 | 89.3 | 70.7 | 80.2 | | FilterKD (Ren et al., 2022) | 59.2 | 90.7 | 89.5 | 80.4 | 89.3 | 69.6 | 79.8 | | MetaDistill (Zhou et al., 2022) | 60.4 | 90.8 | 91.4 | 81.3 | 89.5 | 71.9 | 80.9 | | PT Loss (ours) | 61.2 | 91.1 | 91.2 | 83.5 | 90.3 | 73.1 | 81.8 | | Method | CoLA (Matt.) | MNLI (Acc.) | MRPC (F1) | RTE (Acc.) | SST-2 (Acc.) | BoolQ (Acc.) | Average | |-------------------------|--------------|-------------|-----------|------------|--------------|--------------|---------| | Teacher T5-large (#Param. = 770M) | 61.4 | 93.6 | 92.1 | 87.2 | 95.5 | 77.9 | 84.6 | | Standard KL | 54.8 | 90.0 | 87.8 | 77.6 | 88.8 | 69.5 | 78.1 | | Temp. Scaling (Hinton et al., 2015) | 55.6 | 90.4 | 88.7 | 79.4 | 89.2 | 70.4 | 79.1 | | Label Smoothing (Szegedy et al., 2016) | 56.4 | 90.6 | 89.2 | 79.2 | 89.2 | 69.1 | 79.1 | | Focal (Lin et al., 2017) | 56.0 | 90.3 | 88.4 | 79.9 | 89.3 | 68.9 | 78.8 | | Flooding (Ishida et al., 2020) | 57.8 | 90.0 | 89.5 | 79.5 | 89.0 | 68.9 | 79.3 | | CRD (Tian et al., 2019) | 58.2 | 90.2 | 89.8 | 80.3 | 89.4 | 70.5 | 79.7 | | Annealing KD (Jafari et al., 2021) | 58.3 | 90.4 | 89.8 | 79.9 | 89.4 | 69.7 | 79.6 | | FilterKD (Ren et al., 2022) | 56.7 | 90.2 | 89.1 | 78.8 | 89.2 | 69.2 | 78.9 | | MetaDistill (Zhou et al., 2022) | 58.6 | 90.7 | 89.6 | 81.0 | 89.3 | 70.4 | 80.1 | | PT Loss (ours) | 60.5 | 90.7 | 91.1 | 82.7 | 90.0 | 71.0 | 81.0 | Table 1: Main results on natural language datasets. The student model (BERT-base) is distilled from teacher models of different sizes (T5-xxl and T5-large). All results are averaged over three runs. The bolded numbers indicate the best results, while the underscore “_” denotes the second-best results. --- \(^3\)See more details in Appendix A.5 Compared Methods. We compare PTLoss with the following baselines: (1) Standard KL loss (Kullback, 1959): adopts standard KL divergence loss for knowledge distillation; (2) Temperature scaling (Hinton et al., 2015): scales the teacher output logits via a temperature hyper-parameter; (3) Label smoothing (Szegedy et al., 2016): smooths the teacher output class probabilities by a small scalar; (4) Focal loss (Lin et al., 2017): modulates the cross-entropy loss to focus on hard examples; (5) Flooding (Ishida et al., 2020): a regularization method to intentionally prevent further reduction of the training loss; (6) CRD (Tian et al., 2019): uses a contrastive objective in knowledge distillation; (7) AnnealingKD (Jafari et al., 2021): feeds the rich information provided by the teacher’s soft-targets incrementally; (8) FilterKD (Ren et al., 2022): trains the student from the smoothed predictions of the teacher network; (9) MetaDistill (Zhou et al., 2022): evolves the teacher network with the feedback from the distilled student in a meta learning framework. For all the baselines, we conduct an exhaustive hyper-parameter search on the validation set. For our own PTLoss method, we set its perturbation order $M = 5$ and use the proxy teacher-based method to search its perturbation coefficients (§4.3). See Appendix A.7 for more details. 
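For context, two of these baselines act directly on the teacher's outputs before the standard KL loss is applied. Below is a minimal sketch of both transformations (a hedged illustration with arbitrary hyper-parameter values; the actual values used in the experiments are tuned on the validation set as stated above).

```python
import numpy as np

def temperature_scaled_teacher(logits, T=2.0):
    """Temperature scaling (Hinton et al., 2015): soften teacher logits by T before the softmax."""
    z = logits / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def label_smoothed_teacher(probs, alpha=0.1):
    """One common form of label smoothing (Szegedy et al., 2016): mix teacher probabilities with the uniform distribution."""
    C = probs.shape[0]
    return (1.0 - alpha) * probs + alpha / C

logits = np.array([3.0, 1.0, 0.5])
print(temperature_scaled_teacher(logits, T=2.0))
print(label_smoothed_teacher(np.array([0.7, 0.2, 0.1]), alpha=0.1))
```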
We run each method with three different random seeds and report its average performance. Main Results. Table 1 shows the main results on 6 NLP datasets. We have the following observations: (1) PTLoss achieves the best result on 11 of the 12 task settings and the best average performance across the board. The only exception is MetaDistill, which tops the results on MRPC when using the T5-xxl teacher and ties with PTLoss on MNLI when using the T5-large teacher. (2) The advantages offered by PTLoss are robust, regardless of the scale of the teacher model. Notably, as the disparity in scale between the teacher model and the student model shrinks, the performance gap between them also narrows. (3) In comparison to vanilla KD, which utilizes the standard KL loss, PTLoss showcases significant enhancement. Specifically, it exceeds the standard KL loss by an average of 2.8% and 2.9% with the T5-xxl and T5-large teachers, respectively. (4) Surveying the baseline methods, MetaDistill stands out, securing the second-highest performance across most tasks. On the whole, the cluster of KD methods generally outstrips the simple regularization methods. 5.3 Proxy Teacher Analysis Figure 4: PTLoss analysis. (a) Correlation between the validation TVD of the teacher model and the test accuracy of the student model. Experiments are conducted on BoolQ. (b) The proxy teacher method vs. random search for the perturbation coefficient selection. We conduct experiments on MNLI with a T5-xl teacher. Correlation between teacher’s distance to ground truth and student’s performance. To explore where PTLoss’s performance gains come from, we train multiple teacher models on the BoolQ dataset and distill them into student models. Fig. 4a shows that the student model performance on the test set is highly correlated with the total variation distance (TVD) (i.e., the $l_\infty$ distance in this binary task) between the teacher model’s output distribution and the ground truth distribution on the validation set. This result resonates with our findings from synthetic datasets (§5.1), confirming that, on real-world datasets, a teacher model with a predictive distribution closer to the ground truth can produce a more effectively distilled student. Effectiveness of Perturbation Coefficients Search. We continue to validate the effectiveness of the proxy teacher-based perturbation coefficient selection method using MNLI as a representative dataset. Specifically, we vary the perturbation order $M$ from 1 to 5 and report the performance of the student models distilled via PTLoss with different perturbation coefficients. These coefficients are obtained either by minimizing the empirical risk deviation of the proxy teacher (cf. Eq. 10) or via random sampling from the space $[-1, 10]^M$. As shown in Fig. 4b, the coefficients obtained from our proxy teacher-based method achieve consistent improvements over the random coefficients. If we just randomly set the perturbation coefficients, the student performance can drop by up to 1.2%. Also, by comparing different perturbation orders, we find that the higher the perturbation order, the greater the performance differences. This is because in the higher-dimensional space, it is harder for random search to find a set of appropriate perturbation coefficients, which makes the random PTLoss even worse than the standard KL loss. Conversely, equipped with the perturbation coefficients obtained via the proxy teacher, PTLoss can significantly outperform the underlying KL loss. 6 RELATED WORK Knowledge Distillation.
Knowledge distillation was first proposed in (Buciluă et al., 2006) to compress large models into smaller, faster models without a significant performance drop. Hinton et al. (2015) generalized this technique by introducing a temperature parameter to smooth the teacher model prediction, and Tian et al. (2019) employed contrastive learning to train the student model. Later, Yuan et al. (2020) explored the connection between KD and label smoothing, while Jafari et al. (2021) and Chen et al. (2021) studied the feeding mechanism of the teacher’s knowledge. Zhao et al. (2022) decoupled the classical loss into target-class and non-target-class terms for KD efficiency and flexibility. Ren et al. (2022) investigated supervisory signals and proposed to average teacher outputs for KD stability, while Zhou et al. (2022) evolve the teacher model with the student feedback in a meta-learning framework. Distillation Theory. Concurrent with the empirical success of knowledge distillation, numerous works aim to understand its mechanisms. Hinton et al. (2015) suggest that the teacher’s soft labels offer “dark knowledge” through weights on incorrect labels. Menon et al. (2021) present a statistical view, observing that a good teacher model should approximate the Bayes class-probability distribution to reduce the variance of the student objective. Stanton et al. (2021) highlight discrepancies between teacher and student output distributions and emphasize the optimization challenge in distillation. While more recent studies (Ji and Zhu, 2020; Zhou et al., 2021; Hsu et al., 2021; Allen-Zhu and Li, 2023) explore distillation from various angles, a gap remains between the theoretical analysis and the improved distillation techniques. Loss Function Design. Our work also relates to loss function design and learning. Lin et al. (2017) propose reshaping the cross-entropy loss to concentrate on hard examples and address the data imbalance issue. Leng et al. (2022) expand the cross-entropy loss and focal loss into a linear combination of polynomial functions, primarily studying the Poly-1 formulation on computer vision tasks while avoiding issues with high-order polynomial hyper-parameter searches. TaylorGLO (Gonzalez and Miikkulainen, 2021) utilizes the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to optimize a multivariate Taylor parameterization of a loss function and learning rate schedule, but lacks a principled analysis of the performance gains after perturbation. In contrast, we theoretically and empirically demonstrate the necessity of adding perturbations to the KD learning objective so that the (implicitly transformed) teacher provides high-fidelity supervision to the student. 7 CONCLUSIONS AND FUTURE WORK In this study, we proposed a novel knowledge distillation loss, PTLoss, which implicitly shifts the teacher model output distribution to a high-fidelity one for student model training. We also established connections between PTLoss and other loss functions by demonstrating that PTLoss can subsume the others while providing more flexible adjustments to teacher models. We theoretically showed how the teacher model affects the student model risks and presented a principled method to systematically search for perturbation coefficients. Extensive experiments on multiple tasks verified our proposed theory and validated the effectiveness of distillation via PTLoss. While PTLoss enables better KD by creating a proxy teacher closer to the ground truth distribution, we focus on the single-teacher-single-student setting in this work.
It is worth exploring how this approach can be extended to ensemble KD involving multiple teachers or students. Additionally, although the proposed coefficients selection method provides a principal way to determine the perturbation hyperparameters, it remains challenging to scale up the number of classes and the perturbation order. Future work could benefit from developing scalable methods for hyperparameter search, enabling rapid determination of perturbation coefficients even in high-dimensional spaces with numerous classes or high perturbation orders. REFERENCES Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. In *ICLR*, 2023. Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. *ESAIM: probability and statistics*, 9:323–375, 2005. Cristian Buciluă, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In *Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining*, pages 535–541, 2006. Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia. Distilling knowledge via knowledge review. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 5008–5017, 2021. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint arXiv:1905.10044*, 2019. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop on Paraphrasing (IWP2005)*, 2005. URL https://aclanthology.org/I05-5002. Santiago Gonzalez and Risto Miikkulainen. Optimizing loss functions through multi-variate taylor polynomial parameterization. In *Proceedings of the Genetic and Evolutionary Computation Conference*, pages 305–313, 2021. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2(7), 2015. Daniel Hsu, Ziwei Ji, Matus Telgarsky, and Lan Wang. Generalization bounds via distillation. *arXiv preprint arXiv:2104.05641*, 2021. Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. Do we need zero training loss after achieving zero training error? *arXiv preprint arXiv:2002.08709*, 2020. Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, and Ali Ghodsi. Annealing knowledge distillation. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pages 2493–2504, 2021. Guangda Ji and Zhanxing Zhu. Knowledge distillation in wide neural networks: Risk bound, data efficiency and imperfect teacher. *Advances in Neural Information Processing Systems*, 33: 20823–20833, 2020. Solomon Kullback. Statistics and information theory, 1959. Zhaoqi Leng, Mingxing Tan, Chenxi Liu, Ekin Dogus Cubuk, Jay Shi, Shuyang Cheng, and Dragomir Anguelov. Polyloss: A polynomial expansion perspective of classification loss functions. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=gSdSJoenupT. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. 
In *Proceedings of the IEEE international conference on computer vision*, pages 2980–2988, 2017. Boxiao Liu, Shenghan Zhang, Guanglu Song, Haihang You, and Yu Liu. Rectifying the data bias in knowledge distillation. In *2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)*, pages 1477–1486, 2021. doi: 10.1109/ICCVW54120.2021.00171. Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, and Sanjiv Kumar. Teacher’s pet: understanding and mitigating biases in distillation. *arXiv preprint arXiv:2106.10494*, 2021.
4Ua4hKiAJX
The paper mentions that they tune $L$ and $\rho$ for their method. However, they do not perform any hyperparameter tuning for the comparison methods (SDRF and FOSR); they fix the number of edges to be 40. This is another thing that could account for the inequity between the methods. It is also not mentioned which number is reported. I assume that the experiments trained models for each hyperparameter setting, picked the setting with the best validation performance, and then reported the test error; however, it would be good if this were explicitly mentioned (since Figure 3 reports the metrics on the test data for all hyperparameters).
LOCALITY-AWARE GRAPH REWIRING IN GNNs Federico Barbero\textsuperscript{1,*}, Ameya Velingker\textsuperscript{2}, Amin Saberi\textsuperscript{3}, Michael Bronstein\textsuperscript{1}, Francesco Di Giovanni\textsuperscript{1} \textsuperscript{1}University of Oxford, Department of Computer Science \textsuperscript{2}Google Research \textsuperscript{3}Stanford University, Department of Management Science and Engineering ABSTRACT Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby the feature of a node is updated recursively upon aggregating information over its neighbors. While exchanging messages over the input graph endows GNNs with a strong inductive bias, it can also make GNNs susceptible to \textit{over-squashing}, thereby preventing them from capturing long-range interactions in the given graph. To rectify this issue, \textit{graph rewiring} techniques have been proposed as a means of improving information flow by altering the graph connectivity. In this work, we identify three desiderata for graph-rewiring: (i) reduce over-squashing, (ii) respect the locality of the graph, and (iii) preserve the sparsity of the graph. We highlight fundamental trade-offs that occur between \textit{spatial} and \textit{spectral} rewiring techniques; while the former often satisfy (i) and (ii) but not (iii), the latter generally satisfy (i) and (iii) at the expense of (ii). We propose a novel rewiring framework that satisfies all of (i)–(iii) through a locality-aware sequence of rewiring operations. We then discuss a specific instance of such rewiring framework and validate its effectiveness on several real-world benchmarks, showing that it either matches or significantly outperforms existing rewiring approaches. 1 INTRODUCTION Graph Neural Networks (GNNs) (Sperduti, 1993; Goller & Kuchler, 1996; Gori et al., 2005; Scarselli et al., 2008; Bruna et al., 2014; Defferrard et al., 2016) are widely popular types of neural networks operating over graphs. The majority of GNN architectures act by locally propagating information across adjacent nodes of the graph and are referred to as Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017). Since MPNNs aggregate messages over the neighbors of each node recursively at each layer, a sufficient number of layers is required for distant nodes to interact through message passing (Barceló et al., 2019). In general, this could lead to an explosion of information that needs to be summarized into fixed-size vectors, when the receptive field of a node grows too quickly due to the underlying graph topology. This phenomenon is known as \textit{over-squashing} (Alon & Yahav, 2021), and it has been proved to be heavily related to topological properties of the input graph such as curvature (Topping et al., 2022) and effective resistance (Black et al., 2023; Di Giovanni et al., 2023). Since over-squashing is a limitation of the message-passing paradigm that originates in the topology of the input-graph, a solution to these problems is \textit{graph rewiring} (Topping et al., 2022), in which one alters the connectivity of the graph to favor the propagation of information among poorly connected nodes. 
\textit{Spatial rewiring} techniques often connect each node to any other node in its $k$-hop (Brüel-Gabrielsson et al., 2022; Abboud et al., 2022), or in the extreme case operate over a fully-connected graph weighted by attention – such as for Graph-Transformers (Kreuzer et al., 2021; Mialon et al., 2021; Ying et al., 2021; Rampasek et al., 2022). \textit{Spectral rewiring} techniques instead aim to improve the connectivity of the graph by optimizing for graph-theoretic quantities related to its expansion properties such as the spectral gap, commute time, or effective resistance (Arnaiz-Rodríguez et al., 2022; Karhadkar et al., 2022; Black et al., 2023). While graph rewiring is a promising direction, it also introduces a fundamental trade-off between the preservation of the original topology and the ‘friendliness’ of the graph to message passing. Spatial rewiring techniques partly preserve the graph-distance information (i.e. its ‘locality’) by only adding edges within a certain radius or by relying on positional information. However, these methods often result in a dense computational graph that increases memory complexity and can cause issues such as over-smoothing (Ni & Maehara, 2019; Oono & Suzuki, 2020; Rusch & Mishra, 2020; Di Giovanni et al., 2022). Conversely, spectral rewiring approaches add fewer edges according to some optimization criterion and hence better preserve the sparsity of the input graph. However, these methods ‘maximally’ destroy the locality induced by the graph since they typically insert very ‘long’ edges among distant nodes (see Figure 1). The following natural question then arises: Can we design a general graph rewiring framework that leverages the inductive bias of spatial methods but in a more edge-efficient way characteristic of spectral methods?

*Correspondence to federico.barbero@cs.ox.ac.uk.

Figure 1: Difference between spectral (left), spatial (middle), and LASER (right) rewirings in green with respect to the blue node of reference. Spectral rewirings are sparse and connect distant nodes. Spatial rewirings are able to retain local inductive biases at the cost of sparsity. LASER remains both local and sparse by optimizing over the edges to be added.

Contributions and outline. In this work, we address the above question by proposing a general framework for graph-rewiring that improves the connectivity, while preserving locality and sparsity: • In Section 3 we review existing rewiring approaches and classify them as either spatial or spectral, highlighting their limitations. We then provide a general list of desiderata for rewiring that amounts to (i) reducing over-squashing, and preserving both (ii) the graph-locality and (iii) its sparsity. • In Section 4 we introduce a paradigm for rewiring that depends on arbitrary connectivity and locality measures. We argue that in order to satisfy (i)–(iii) above, a single rewiring is not enough, and instead propose sequential rewiring, where multiple graph snapshots are considered. Building on Karhadkar et al. (2022), we also draw an important equivalence between graph-rewiring on one side, and multi-relational GNNs and temporal-GNNs on the other. • In Section 5 we present a specific instance of the aforementioned paradigm termed Locality-Aware SEquential Rewiring (LASER). Our framework leverages the distance similarly to spatial rewiring while also guaranteeing the efficiency of spectral techniques by sampling edges to add according to equivariant, optimal conditions.
We show that LASER reduces over-squashing and better preserves the locality of the graph compared to spectral rewiring techniques. • In Section 6 we validate LASER on different tasks, attaining performance that is on par or superior to existing rewiring techniques. In particular, we present extensive ablation studies to support our claim that LASER is more efficient than spatial methods while being better at preserving graph-distance information in comparison to spectral approaches. 2 BACKGROUND Preliminaries on graphs. Let $G = (V, E)$ be an undirected graph with $n$ nodes $V$ and edges $E$, which are encoded by the non-zero entries of the adjacency matrix $A \in \mathbb{R}^{n \times n}$. Let $D$ be the diagonal degree matrix such that $D_{uv} = d_u$. We recall that the normalized graph Laplacian $\Delta = D^{-1/2}(D - A)D^{-1/2}$ is a symmetric positive semi-definite operator with eigenvalues $0 = \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$. We assume that $G$ is connected, so that $\lambda_1 > 0$ and refer to it as the spectral gap. From the Cheeger inequality, it follows that a larger $\lambda_1$ generally means better connectivity of $G$. We denote by $d_G(u, v)$ the shortest-path distance between the nodes $u, v$. We finally recall that a random walk on $G$ is a Markov chain on $V$ with transition matrix $D^{-1}A$ and that the commute time $CT$ is defined as the expected number of steps required for a random walk to commute between two nodes. Note that the commute time \( \text{CT}(v, u) \) between two nodes \( v \) and \( u \) is proportional to their effective resistance \( R(v, u) \) (Chandra et al., 1996) as \( \text{CT}(v, u) = 2|E|R(v, u) \). The message-passing paradigm. We consider the case where each node \( v \) has a feature \( x_v^{(0)} \in \mathbb{R}^d \). It is common to stack the node features into a matrix \( X^{(0)} \in \mathbb{R}^{n \times d} \) consistently with the ordering of \( A \). GNNs are functions defined on the featured graph that can output node, edge, or graph-level values. The most common family of GNN architectures are Message Passing Neural Networks (MPNN), which compute latent node representations by stacking \( T \) layers of the form: \[ x_v^{(t)} = \text{up}^{(t)}(x_v^{(t-1)}, a^{(t)}(\{x_u^{(t-1)} : (v, u) \in E\})), \] for \( t = 1, \ldots, T \), where \( a^{(t)} \) is some permutation-invariant aggregation function, while \( \text{up}^{(t)} \) updates the node’s current state with aggregated messages from its neighbors. Over-squashing and long-range interactions. While the message-passing paradigm usually constitutes a strong inductive bias, it is problematic for capturing long-range interactions due to a phenomenon known as over-squashing. Given two nodes \( u, v \) at distance \( d_G(u, v) = r \), an MPNN will require \( T \geq r \) layers to exchange messages between them. When the receptive fields of the nodes expand too quickly (due to volume growth properties characteristic of many real-world scale free graphs), the MPNN needs to aggregate a large number of messages into fixed-size vectors, leading to some corruption of the information (Alon & Yahav, 2021). This effect on the propagation of information has been related to the Jacobian of node features decaying exponentially with \( r \) (Topping et al., 2022). More recently, it was shown that the Jacobian is affected by topological properties such as effective resistance (Black et al., 2023; Di Giovanni et al., 2023). 
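To make these quantities concrete, the sketch below (NumPy, a hedged illustration rather than code from the paper) computes pairwise effective resistances from the pseudoinverse of the combinatorial Laplacian \( L = D - A \) and converts them to commute times via \( \text{CT}(v, u) = 2|E| R(v, u) \).

```python
import numpy as np

def effective_resistance_matrix(A):
    """Pairwise effective resistances R(v, u) of a connected undirected graph.

    A: dense (n, n) 0/1 adjacency matrix (symmetric, no self-loops).
    Uses R(v, u) = L+_{vv} + L+_{uu} - 2 L+_{vu}, with L = D - A the
    combinatorial Laplacian and L+ its Moore-Penrose pseudoinverse.
    """
    L = np.diag(A.sum(axis=1)) - A
    L_pinv = np.linalg.pinv(L)
    diag = np.diag(L_pinv)
    return diag[:, None] + diag[None, :] - 2.0 * L_pinv

def commute_time_matrix(A):
    """Commute times via CT(v, u) = 2|E| * R(v, u)."""
    num_edges = A.sum() / 2.0
    return 2.0 * num_edges * effective_resistance_matrix(A)

# Toy usage on a 4-cycle: opposite nodes see two parallel paths of length 2, so R = 1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(effective_resistance_matrix(A)[0, 2])   # ~1.0
print(commute_time_matrix(A)[0, 2])           # ~8.0
```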
3 EXISTING GRAPH-REWIRING APPROACHES AND THEIR LIMITATIONS The main principle behind graph rewiring in GNNs is to decouple the input graph \( G \) from the computational one. Namely, rewiring consists of applying an operation \( R \) to \( G = (V, E) \), thereby producing a new graph \( R(G) = (V, R(E)) \) on the same vertices but with altered connectivity. We begin by generalizing the MPNN formalism to account for the rewiring operation \( R \) as follows: \[ x_v^{(t)} = \text{up}^{(t)}(x_v^{(t-1)}, a^{(t)}_G(\{x_u^{(t-1)} : (v, u) \in E\}), a^{(t)}_{R(G)}(\{x_u^{(t-1)} : (v, u) \in R(E)\})), \] where a node feature is now updated based on information collected over the input graph \( G \) and the rewired one \( R(G) \), through (potentially) independent aggregation maps. Many rewiring-based GNN models simply exchange messages over \( R(G) \), i.e., they take \( a_G = 0 \). The idea of rewiring the graph is implicit to many GNNs, from using Cayley graphs (Deac et al., 2022), to virtual nodes (Cai et al., 2023) and cellular complexes (Bodnar et al., 2021). Other works have studied the implications of directly changing the connectivity of the graph to de-noise it (Klicpera et al., 2019), or to explore multi-hop aggregations (Abu-El-Haija et al., 2019; Ma et al., 2020; Wang et al., 2020; Nikolentzos et al., 2020). Ever since over-squashing was identified as an issue in MPNNs (Alon & Yahav, 2021), several novel rewiring approaches have been proposed to mitigate this phenomenon. Related work on spatial rewiring. Most spatial rewiring models attempt to alleviate over-squashing by adding direct connections between a node and every other node within a certain distance (Brüel-Gabrielsson et al., 2022; Abboud et al., 2022) — with (dense) Graph Transformers being the extreme case (Ying et al., 2021; Mialon et al., 2021; Kreuzer et al., 2021; Rampasek et al., 2022). These frameworks follow equation 2, where \( a_G \) and \( a_{R(G)} \) are learned independently, or the former is zero while the second implements attention over a dense graph. Spatial rewiring reduces over-squashing by creating new paths in the graph, thus decreasing its diameter or pairwise effective resistances between nodes. The rewired graph still preserves some information afforded by the original topology in the form of distance-aware aggregations in multi-hop GNNs, or positional encoding in Graph-Transformers. A drawback of this approach, however, is that we end up compromising the sparsity of the graph, thereby impacting efficiency. Thus, a natural question is whether some of these new connections introduced by spatial rewiring methods may be removed without affecting the improved connectivity. We also mention spatial rewiring methods based on improving the curvature of \( G \) by only adding edges among nodes at distance at most two (Topping et al., 2022; Nguyen et al., 2022). Accordingly, these models may fail to significantly improve the effective resistance of the graph unless a large number of local edges is added. **Related work on spectral rewiring methods.** A different class of approaches consist of rewiring the graph based on a global spectral quantity rather than using spatial distance. Two prototypical measures that have been explored in this regard are spectral gap (Karhadkar et al., 2022) and effective resistance (Arnaiz-Rodríguez et al., 2022; Banerjee et al., 2022; Black et al., 2023). 
It has recently been shown that a node \( v \) is mostly insensitive to information contained at nodes that have high effective resistance (Black et al., 2023; Di Giovanni et al., 2023); accordingly, spectral rewiring approaches alleviate over-squashing by reducing the effective resistance. Moreover, they achieve that adding only a few edges by optimally increasing the chosen measure of connectivity, hence maintaining the sparsity level of the input graph. However, the edges that are added in the graph typically end up connecting very distant nodes (since the distance between two nodes is at least as large as their effective resistance), hence rapidly diminishing the role of locality provided by distance on the original graph. **An ideal rewiring approach.** Given a graph \( G \), an ideal rewiring map \( R \) should satisfy the following desiderata: (i) **Reduce over-squashing:** \( R \) increases the overall connectivity of \( G \)—according to some topological measure—in order to alleviate over-squashing; (ii) **Preserve locality:** \( R \) preserves some inductive bias afforded by \( G \), e.g., nodes that are “distant” should be kept separate from nodes that are closer in the GNN architecture; (iii) **Preserve sparsity:** \( R \) approximately preserves the sparsity of \( G \), ideally adding a number of edges linear in the number of nodes. While condition (i) represents the main rationale for rewiring the input graph, criteria (ii) and (iii) guarantee that the rewiring is efficient and do not allow the role played by the structural information in the input graph to degrade too much. As discussed above and summarized in Table 1, spatial methods typically satisfy only (i) and (ii), but not (iii), while spectral-methods meet (i) and (iii) but fail (ii). **Main idea.** Our main contribution is a novel paradigm for graph rewiring that satisfies criteria (i)–(iii), leveraging a key principle: instead of considering a single rewired graph \( R(G) \), we use a sequence of rewired graphs \( \{R_\ell(G)\}_\ell \) such that for smaller \( \ell \), the new edges added in \( R_\ell(G) \) are more ‘local’ (with respect to the input graph \( G \)) and sampled based on optimizing a connectivity measure. ### 4 A GENERAL PARADIGM: DYNAMIC REWIRING WITH LOCAL CONSTRAINTS In this Section, we discuss a general graph-rewiring paradigm that can enhance any MPNN and satisfies the criteria (i)–(iii) described above. Given a graph \( G \), consider a trajectory of rewiring operations \( R_\ell \), starting at \( G_0 = G \), of the form: \[ G = G_0 \xrightarrow{R_1} G_1 \xrightarrow{R_2} \cdots \xrightarrow{R_L} G_L. \] Since we think of \( G_\ell \) as the input graph evolved along a dynamical process for \( \ell \) iterations, we refer to \( G_\ell \) as the \( \ell \)-snapshot. For the sake of simplicity, we assume \( R_\ell = R \), though it is straightforward to extend the discussion below to the more general case. In order to account for the multiple snapshots, we modify the layer form in equation 2 as follows: \[ x_v^{(t)} = \text{up}(t)\left(x_v^{(t-1)}, \left(a_{\mu_\ell}\left(\{x_u^{(t-1)} : (v, u) \in E_\ell\}\right)\right)_{0 \leq \ell \leq L}\right). \] Below we describe a rewiring paradigm based on an arbitrary connectivity measure \( \mu : V \times V \to \mathbb{R} \) and locality measure \( \nu : V \times V \to \mathbb{R} \). 
The measure \( \mu \) can be any topological quantity that captures how easily different pairs of nodes can communicate in a graph, while the measure \( \nu \) is any quantity that penalizes interactions among nodes that are ‘distant’ according to some metric on the input graph. In a nutshell, our choice of \( R \) samples edges to add according to the constraint \( \nu \), prioritizing those that maximally benefit the measure \( \mu \). By keeping this generality, we provide a universal approach to do graph-rewiring that can be of interest independently of the specific choices of \( \mu \) and \( \nu \). | Property | Spatial | Spectral | LASER | |---------------------------|---------|----------|-------| | Reduce over-squashing | ✓ | ✓ | ✓ | | Preserve locality | ✓ | ✗ | ✓ | | Preserve sparsity | ✗ | ✓ | ✓ | Improving connectivity while preserving locality. The first property we demand of the rewiring sequence is that for all nodes \( v, u \), we have \( \mu_{G_{\ell+1}}(v,u) \geq \mu_{G_\ell}(v,u) \) and that for some nodes, the inequality is strict. If we connect all pairs of nodes with low \( \mu \)-value, however, we might end up adding non-local edges across distant nodes, hence quickly corrupting the locality of \( G \). To avoid this, we constrain each rewiring by requiring the measure \( \nu \) to take values in a certain range \( I_\ell \subset [0, \infty) \): an edge \((v,u)\) appears in the \( \ell \)-snapshot (for \( 1 \leq \ell \leq L \)) according to the following rule: \[ (v,u) \in E_\ell \text{ if } (\mu_{G_0}(v,u) < \epsilon \text{ and } \nu_{G_0}(v,u) \in I_\ell) \text{ or } (v,u) \in E_{\ell-1}. \] To make the rewiring more efficient, the connectivity and locality measures are computed once over the input graph \( G_0 \). Since the edges to be added connect nodes with low \( \mu \)-values, the rewiring makes the graphs \( G_\ell \) friendlier to message-passing as \( \ell \) grows. Moreover, by taking increasing ranges of values for the intervals \( I_\ell \), we make sure that new edges connect distant nodes, as specified by \( \nu \), only at later snapshots. Sequential rewiring allows us to interpolate between the given graph and one with better connectivity, creating intermediate snapshots that progressively add non-local edges. By accounting for all the snapshots \( G_\ell \) in equation 2, the GNN can access both the input graph, and more connected ones, at a much finer level than ‘instantaneous’ rewirings, defined next. Instantaneous vs sequential rewiring. As discussed in Section 3, existing rewiring techniques — particularly those of the spectral type — often consider the simpler trajectory \( G_0 \rightarrow R(G_0) := G_1 \) (“instantaneous rewiring”). The main drawback of this approach is that in order to improve the connectivity in a single snapshot, the rewiring map \( R \) is bound to either violate the locality constraint \( \nu \), by adding edges between very distant nodes, or compromise the graph-sparsity by adding a large volume of (local) edges. In fact, if that were not the case, we would still be severely affected by over-squashing. Conversely, sequential rewiring allows a smoother evolution from the input graph \( G_0 \) to a configuration \( G_L \) which is more robust to over-squashing, so that we can more easily preserve the inductive bias afforded by the topology via local constraints under equation 5. An equivalent perspective: multi-relational GNNs. In Karhadkar et al. 
(2022) the notion of relational rewiring was introduced for spectral methods. We expand upon this idea, by noticing that the general, sequential rewiring paradigm described above can be instantiated as a family of multi-relational GNNs (Battaglia et al., 2018; Barcelo et al., 2022). To this aim, consider a slightly more specific instance of equation 4, which extends common MPNN frameworks: \[ x_v^{(t)} = \text{up}^{(t)} \left( x_v^{(t-1)}, \sum_{\ell=0}^{L} \sum_{(v,u) \in E_\ell} \psi_\ell^{(t)}(x_v^{(t-1)}, x_u^{(t-1)}) \right), \] where \( \psi_\ell^{(t)} \) are learnable message functions depending on both the layer \( t \) and the snapshot \( \ell \). It suffices now to note that each edge set \( E_\ell \), originated from the rewiring sequence, can be given its own relation, so that equation 6 is indeed equivalent to the multi-relation GNN framework of Battaglia et al. (2018). In fact, since we consider rewiring operations that only add edges to improve the connectivity, we can rearrange the terms and rename the update and message-function maps, so that we aggregate over existing edges once, and separately over the newly added edges i.e. the set \( E_\ell \setminus E_{\ell-1} \). Namely, we can rewrite equation 6 as \[ x_v^{(t)} = \text{up}^{(t)} \left( x_v^{(t-1)}, \sum_{u : (v,u) \in E} \psi_0^{(t)}(x_v^{(t-1)}, x_u^{(t-1)}) + \sum_{\ell=1}^{L} \sum_{(v,u) \in E_\ell \setminus E_{\ell-1}} \psi_\ell^{(t)}(x_v^{(t-1)}, x_u^{(t-1)}) \right). \] Accordingly, we see how our choice of sequential rewiring can be interpreted as an extension of relational rewiring in Karhadkar et al. (2022), where \( L = 1 \). Differently from Karhadkar et al. (2022), the multiple relations \( \ell \geq 1 \) allow us to add connections over the graph among increasingly less local nodes, meaning that the edge-type \( \ell \) is now associated to a notion of locality specified by the choice of the constraint \( \nu(v,u) \in I_\ell \). We finally observe that the connection between graph-rewiring and relational GNNs is not surprising once we think of the sequence of rewiring in equation 3 as snapshots of a temporal dynamics over the graph connectivity. Differently from the setting of temporal GNNs (Rossi et al., 2020) though, here the evolution of the connectivity over time is guided by our rewiring procedure rather than by an intrinsic law on the data. In fact, Gao & Ribeiro (2022) studied the equivalence between temporal GNNs and static multi-relational GNNs, which further motivate the analogy discussed above. 5 LOCALITY-AWARE SEQUENTIAL REWIRING: THE LASER FRAMEWORK We consider an instance of the outlined sequential rewiring paradigm, giving rise to the LASER framework used in our experiments. We show that LASER (i) mitigates over-squashing, (ii) preserves the inductive bias provided by the shortest-walk distance on $G$ better than spectral approaches, while (iii) being sparser than spatial-rewiring methods. The choice of locality. We choose $\nu$ to be the shortest-walk distance $d_G$. In particular, if in equation 5 we choose intervals $I_\ell = \delta_{\ell+1}$, then at the $\ell$-snapshot $G_\ell$ we only add edges among nodes at distance exactly $\ell + 1$. Our constraints prevent distant nodes from interacting at earlier snapshots and allows the GNN to learn message functions $\psi_\ell$ in equation 7 for each hop level $\ell$. 
If we choose $E_\ell \setminus E_{\ell-1}$ to be the set of all edges connecting nodes whose distance is exactly $\ell + 1$, then equation 7 is equivalent to the $L$-hop MPNN class studied in Feng et al. (2022). This way though, we generally lose the sparsity of $G$ and increase the risk of over-smoothing. Accordingly, we propose to only add edges that satisfy the locality constraint and have connectivity measure ‘small’ so that their addition is optimal for reducing over-squashing. The choice of the connectivity measure $\mu$. Although edge curvature or effective resistance $R$ are related to over-squashing (Topping et al., 2022; Black et al., 2023; Di Giovanni et al., 2023), computing these metrics incur high complexity – $O(|E|d_{max}^2)$ for the curvature and $O(n^3)$ for $R$. Because of that, we propose a more efficient connectivity measure: $$\mu_k(v,u) := (\tilde{A}^k)_{vu}, \quad \tilde{A} := A + I.$$ Because of the self-loops, the entry $(\tilde{A}^k)_{vu}$ equals the number of walks from $v$ to $u$ of length at most $k$. Once we fix a value $k$, if $\mu_k(v,u)$ is large, then the two nodes $v,u$ have multiple alternative routes to exchange information (up to scale $k$) and would usually have small effective resistance. In particular, according to Di Giovanni et al. (2023, Theorem 4.1), we know that the number of walks among two nodes is a proxy for how sensitive a pair of nodes is to over-squashing. LASER focus. We can now describe our framework. Given a node $v$ and a snapshot $G_\ell$, we consider the set of nodes at distance exactly $\ell + 1$ from $v$, which we denote by $N_{\ell+1}(v)$. We introduce a global parameter $\rho \in (0, 1]$ and add edges (with relation type $\ell$ as per equation 7) among $v$ and the fraction $\rho$ of nodes in $N_{\ell+1}(v)$ with the lowest connectivity score – if this fraction is smaller than one, then we round it to one. This way, we end up adding only a percentage $\rho$ of the edges that a normal multi-hop GNNs would have, but we do so by prioritizing those edges that improve the connectivity measure the most. To simplify the notations, we let $N_{\ell+1}^\rho(v) \subset N_{\ell+1}(v)$, be the $\rho$-fraction of nodes at distance $\ell + 1$ from $v$, where $\mu_k$ in equation 8 takes on the lowest values. We express the layer-update of LASER as $$x_v^{(t)} = \text{up}^{(t)} \left( x_v^{(t-1)}, \sum_{u: (v,u) \in E} \psi_0(x_v^{(t-1)}, x_u^{(t-1)}) + \sum_{\ell=1}^L \sum_{u \in N_{\ell+1}^\rho(v)} \psi_\ell(x_v^{(t-1)}, x_u^{(t-1)}) \right).$$ We note that when $\rho = 0$, equation (9) reduces to a standard MPNN on the input graph, while for $\rho = 1$ we recover multi-relational $L$-hop MPNNs (Feng et al., 2022). Although the framework encompasses different choices of the message-functions $\psi_\ell$, in the following we focus on the LASER-GCN variant, whose update equation is reported in Appendix (Section A). We now show that the LASER framework satisfies the criteria (i)–(iii) introduced in Section 3. Let $J^{(r)}(v,u) := \partial x_v^{(r)} / \partial x_u^{(0)}$ be the Jacobian of features after $r$ layers of GCN on $G$, and similarly we let $\hat{J}^{(r)}(v,u)$ be the Jacobian of features after $r$ layers of LASER-GCN in equation 10. In the following, we take the expectation with respect to the Bernoulli variable ReLU' which is assumed to have probability of success $\rho$ for all paths in the computational graph as in Xu et al. (2018); Di Giovanni et al. (2023). 
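Before turning to the formal guarantees, the following sketch (NumPy, hypothetical helper names; not the authors' implementation) illustrates the rewiring step just described: it computes \( \mu_k(v, u) = (\tilde{A}^k)_{vu} \), finds the nodes at each hop distance, and keeps, for every node and hop level, the \( \rho \)-fraction of candidates with the lowest connectivity score (rounded up to at least one).

```python
import numpy as np

def shortest_path_distances(A):
    """All-pairs shortest-path (hop) distances via repeated frontier expansion."""
    n = A.shape[0]
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0.0)
    reach = np.eye(n, dtype=bool)
    frontier = np.eye(n, dtype=bool)
    adj = (A > 0).astype(float)
    for d in range(1, n):
        frontier = ((frontier.astype(float) @ adj) > 0) & ~reach
        if not frontier.any():
            break
        dist[frontier] = d
        reach |= frontier
    return dist

def laser_added_edges(A, num_snapshots=3, rho=0.1, k=3):
    """Return {ell: list of (v, u)} edges added at snapshot ell (ell = 1..num_snapshots).

    At snapshot ell we connect v to the rho-fraction of nodes at distance exactly
    ell + 1 with the LOWEST connectivity score mu_k(v, u) = (A + I)^k [v, u] (Eq. 8).
    """
    n = A.shape[0]
    mu = np.linalg.matrix_power(A + np.eye(n), k)  # number of walks of length at most k
    dist = shortest_path_distances(A)
    added = {}
    for ell in range(1, num_snapshots + 1):
        edges = []
        for v in range(n):
            candidates = np.where(dist[v] == ell + 1)[0]
            if candidates.size == 0:
                continue
            budget = max(1, int(np.floor(rho * candidates.size)))
            order = candidates[np.argsort(mu[v, candidates])]  # lowest mu first
            edges.extend((v, int(u)) for u in order[:budget])
        added[ell] = edges
    return added

# Toy usage: a path graph on 6 nodes, two snapshots, keeping half of each hop-orbit.
P = np.zeros((6, 6))
idx = np.arange(5)
P[idx, idx + 1] = P[idx + 1, idx] = 1
print(laser_added_edges(P, num_snapshots=2, rho=0.5, k=3))
```

Both the walk counts and the distance computation above rely only on (sparse) matrix products, which is consistent with the scalability discussion in Section 6.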
We recall that given $i \in V$ and $1 \leq \ell \leq L$, $d_{i,\ell}$ enters equation 10. Proposition 5.1. Let $v,u \in V$ with $d_G(v,u) = r$, and assume that there exists a single path of length $r$ connecting $v$ and $u$. Assume that LASER adds an edge between $v$ and some node $j$ belonging to the path of length $r$ connecting $v$ to $u$, with $d_G(v,j) = \ell < r$. Then for all $m \leq r$, we have $$||\mathbb{E}[\hat{J}^{(r-\ell+1)}(v,u)]|| \geq \frac{(d_{min})^\ell}{\sqrt{d_{v,\ell-1}d_{j,\ell-1}}} ||\mathbb{E}[J^{(m)}(v,u)]||.$$ The result is not surprising and shows that in general, the LASER-rewiring can improve the Jacobian sensitivity significantly and hence alleviates over-squashing, satisfying desideratum (i). Next, we validate that the effects of the local constraints when compared to unconstrained, global spectral methods. Below, we let \( D_G \) be the matrix of pairwise distances associated with the graph \( G \), i.e. \((D_G)_{vu} = d_G(v, u)\). We propose to investigate \( \|D_G - D_{R(G)}\|_F \), where \( \| \cdot \|_F \) is the Frobenius norm and \( R(G) \) is either a baseline spectral rewiring, or our LASER-framework. We treat this quantity as a proxy for how well a rewiring framework is able to preserve the inductive bias given by the input graph. In fact, for many graphs (including molecular-type with small average degree), spectral rewirings incur a larger Frobenius deviation even if they add fewer edges, since these edges typically connect very distant nodes in the graph. To this aim, we show a setting where LASER preserves more of the locality inductive bias than spectral-based methods provided we choose the factor \( \rho \) small enough. Below, we focus on a case that, according to Di Giovanni et al. (2023); Black et al. (2023), we know to be a worst-case scenario for over-squashing considering that the commute time scales cubically in the number of nodes. Put differently, the graph below represents a prototypical case of ‘bottleneck’ encountered when information has to travel from the end of the chain to the clique. **Proposition 5.2.** Let \( G \) be a ‘lollipop’ graph composed of a chain of length \( L \) attached to a clique of size \( n \) sufficiently large. Consider a spectral rewiring \( R \) which adds an edge between nodes with the highest effective resistance. We can choose the factor \( \rho \in (0, 1) \) as a function of \( L \) so that LASER with a single snapshot, on average, adds a number of edges that guarantees: \[ \|D_G - D_{R(G)}\|_F \geq \|D_G - D_{LASER}\|_F. \] We refer to the Appendix (Section A) for an explicit characterization on how large \( n \) needs to be depending on \( L \) and the proofs of the statements above. Finally, as desired in (iii), we observe that compared to dense multi-hop GNNs, LASER is more efficient since it only adds a fraction \( \rho \) of edges for each node \( v \) and each orbit-level \( N_{\ell+1}(v) \). In fact, for many sparse graphs (such as molecular ones) the model ends up adding a number of edges proportional to the number of nodes (see Section C.2 in the Appendix for a discussion and ablations). ### 6 EXPERIMENTS In this section, we validate our claims on a range of tasks and benchmarks. Beyond comparing the performance of LASER to existing baselines, we run ablations to address the following important questions: (1) Does LASER improve the graph’s connectivity? (2) Does LASER preserve locality information better than spectral rewiring approaches? 
(3) What is the impact of the fraction \( \rho \) of edges sampled? (4) What if we sample edges to be added from \( N_{\ell+1}(v) \) randomly, rather than optimally according to \( \mu \) in equation 8? (5) Is LASER scalable to large graphs? In the Appendix (Section C), we provide a density comparison between LASER and Multi-Hop GNNs, discuss our tie-breaking procedure that guarantees equivariance in expectation and further improves performance, provide an ablation using different underlying MPNNs, and discuss additional motivation for the need for locality. We also provide, in Section D, a more thorough scalability analysis. **Benchmarks.** We evaluate on the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) and TUDatasets (Morris et al., 2020). In the experiments, we fix the underlying model to GCN, but provide ablations with different popular MPNNs in the Appendix (Section C.3). For spatial curvature-based rewirings, we compare against SDRF (Topping et al., 2022) and BORF (Nguyen et al., 2023). For spectral techniques, we compare against FOSR (Karhadkar et al., 2022), a spectral gap rewiring technique, and GTR (Black et al., 2023), an effective resistance rewiring technique. We also compare to DiffWire (Arnaiz-Rodriguez et al., 2022), a differentiable rewiring technique. | Rewiring | Peptides-func Test AP ↑ | Peptides-struct Test MAE ↓ | PCQM-Contact Test MRR ↑ | |----------|-------------------------|---------------------------|------------------------| | None | 0.5930±0.0023 | 0.3496±0.0013 | 0.3234±0.0006 | | SDRF | 0.5947±0.0035 | 0.3404±0.0015 | 0.3249±0.0006 | | GTR | 0.5075±0.0029 | 0.3618±0.0010 | 0.3007±0.0022 | | FOSR | 0.5947±0.0027 | 0.3078±0.0026 | 0.2783±0.0008 | | BORF | 0.6012±0.0031 | 0.3374±0.0011 | TIMEOUT | | LASER | **0.6440±0.0010** | **0.3043±0.0019** | **0.3275±0.0011** | Based on Karhadkar et al. (2022) and the parallelism we draw between rewiring and multi-relational GNNs, for all techniques, we report results tuned over both a ‘standard’ and relational (Schlichtkrull et al., 2018) model for the baselines, where we assign original and rewired edges distinct relational types. In particular, R-GCN in these cases is then a special instance of equation 2. For additional details on the tasks and hyper-parameters, we refer to the Appendix (Section B). **LRGB.** We consider the Peptides (15,535 graphs) and PCQM–Contact (529,434 graphs) datasets, from the Long Range Graph Benchmark (LRGB). There are two tasks associated with Peptides, a peptide function classification task Peptides–func and a peptide structure regression task Peptides-struct. PCQM–Contact is a link-prediction task, in which the goal is to predict pairs of distant nodes that will be adjacent in 3D space. We replicate the experimental settings from Dwivedi et al. (2022), with a 5-layer MPNN for each of the rewirings as the underlying model. We choose the hidden dimension in order to respect the 500k parameter budget. In Table 2, we report the performance on the three tasks. LASER convincingly outperforms all baselines on the three tasks, while the other rewiring baselines frequently perform worse than the standard GCN model. On PCQM–Contact, the rewiring time for BORF surpasses the 60 hour limit enforced by Dwivedi et al. (2020) on our hardware, so we assign it a TIMEOUT score. **TUDatasets.** We evaluate LASER on the REDDIT–BINARY, IMDB–BINARY, MUTAG, ENZYMES, PROTEINS, and COLLAB tasks from TUDatasets, which were chosen by Karhadkar et al. 
(2022) under the claim that they require long-range interactions. We evaluate on 25 random splits, fixing the hidden dimension for all models to 64 and the number of layers to 4, as in Karhadkar et al. (2022). We avoid the use of dropout and use Batch Norm (Ioffe & Szegedy, 2015). We refer to the Appendix (Section B.2) for further details on the hyper-parameters and a discussion on some drawbacks of these tasks. Table 3 shows the results on the aforementioned benchmarks. LASER most consistently achieves the best classification accuracy, attaining the highest mean rank. Table 3: Accuracy ± std over 25 random splits for the datasets and rewirings. Colors highlight First, Second, and Third; we report the mean rank achieved on the valid runs. OOM is Out of Memory. | Rewiring | REDDIT–BINARY | IMDB–BINARY | MUTAG | ENZYMES | PROTEINS | COLLAB | Mean Rank | |----------|---------------|-------------|-------|---------|----------|--------|-----------| | None | 81.000±2.717 | **64.280±1.990** | 74.737±5.955 | 28.733±5.297 | 64.286±2.004 | 68.960±2.284 | 4.83 | | DiffWire | OOM | 59.000±3.847 | **80.421±9.707** | 28.533±4.475 | **72.714±2.946** | 65.440±2.177 | 4.83 | | GTR | **85.700±2.786** | 52.560±4.104 | 78.632±6.201 | 26.333±5.821 | **72.303±4.658** | 68.024±2.299 | 4.67 | | SDRF | 84.420±2.785 | 58.290±3.201 | 74.526±5.355 | **30.567±6.188** | 68.714±4.233 | **70.222±2.571** | 4.50 | | FOSR | **85.930±2.793** | 60.400±5.855 | 75.895±7.211 | 28.600±5.253 | 71.643±3.428 | **69.848±3.485** | 3.67 | | BORF | 84.920±2.534 | **60.820±3.877** | **81.684±7.964** | **30.500±6.593** | 68.411±4.122 | OOM | 3.60 | | LASER | **85.458±2.827** | **64.333±3.298** | **82.204±6.728** | **34.333±6.936** | **74.381±3.443** | **70.923±2.538** | 1.37 | **Ablation studies.** In the following, we choose FOSR as a typical spectral rewiring approach, while taking LASER with \( \rho = 1 \) as an instance of a dense, multi-hop GNN (i.e. classical spatial rewiring). For the purpose of these ablations, we conduct experiments on the Peptides dataset. We start by investigating questions (1) and (2), namely, how well LASER improves connectivity while respecting locality. To this end, we increment the number of snapshots from 2 to 5 given densities \( \rho = 0.1 \) and \( \rho = 1 \) for LASER and instead vary the number of edge additions of FOSR spanning the values 10, 20, 50, and 100. To assess the connectivity, we report the mean total effective resistance — which is a good proxy for over-squashing (Black et al., 2023; Di Giovanni et al., 2023) — while for the locality, we evaluate the norm of the difference between the original graph distance matrix and that of the rewired graph \( \| D_G - D_{R(G)} \|_F \) as per Proposition 5.2. Figure 2 shows the results of this ablation. We validate that the sparse LASER framework decreases the mean total effective resistance consistently over increasing snapshots as well as other rewiring techniques. Moreover, we find that LASER with \( \rho = 0.1 \) is better than dense spatial methods and especially surpasses spectral approaches at preserving information contained in the distance matrix. Next, we investigate question (3), i.e. the impact of the fraction \( \rho \) of edges being sampled, by increasing the number of snapshots from 2 to 5 and varying the density \( \rho \) ranging 0.1, 0.25, 0.5, and 1, with results reported in Figure 3. 
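Both ablation metrics used here, the total effective resistance as a connectivity proxy and the deviation \( \| D_G - D_{R(G)} \|_F \) as a locality proxy, can be computed with standard graph tooling. The sketch below is purely illustrative and is not the authors' implementation; the lollipop example, node indices, and edge choices are arbitrary placeholders (normalizing the total effective resistance by the number of node pairs gives the mean reported in the ablation).

```python
import networkx as nx
import numpy as np

def total_effective_resistance(G):
    """Sum of effective resistances over all node pairs, via the identity
    R_tot = n * trace(L^+), where L^+ is the pseudoinverse of the Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return G.number_of_nodes() * np.trace(np.linalg.pinv(L))

def distance_matrix_deviation(G, G_rewired):
    """Frobenius norm ||D_G - D_{R(G)}||_F between all-pairs shortest-path
    distance matrices (the locality proxy discussed above)."""
    nodes = list(G.nodes())
    D = nx.floyd_warshall_numpy(G, nodelist=nodes)
    D_r = nx.floyd_warshall_numpy(G_rewired, nodelist=nodes)
    return np.linalg.norm(D - D_r, ord="fro")

# Toy 'lollipop' graph: an 8-node clique attached to a 12-node path.
G = nx.lollipop_graph(m=8, n=12)
G_spectral = G.copy()
G_spectral.add_edge(0, 19)    # long-range edge between clique and path end
G_local = G.copy()
G_local.add_edge(10, 12)      # local edge between nodes two hops apart

for name, H in [("spectral-style", G_spectral), ("local", G_local)]:
    print(name,
          "R_tot:", round(total_effective_resistance(H), 2),
          "||D_G - D_R||_F:", round(distance_matrix_deviation(G, H), 2))
```

On toy graphs like this one, a single long-range edge tends to lower the total effective resistance sharply while perturbing the distance matrix far more than a local edge does, which is the trade-off these ablations probe.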
The majority of the performance gains are obtained through a sparse rewiring, as even with \( \rho = 0.1 \) Table 4: Comparison between LASER and random sampling, with \( L = 3 \) and \( \rho = 0.1 \). | Model | Peptides–func ↑ | Peptides–struct ↓ | |----------------|-----------------|------------------| | Random | 0.4796±0.0067 | 0.3382±0.0019 | | LASER | **0.6414±0.0020** | **0.3119±0.0005** | the performance is greatly increased over the baseline. The additional density in the orbits does seem to help with performance, but this comes at the cost of density. Finally, we address question (4), by evaluating how sampling edges uniformly over the nodes at distance \( l + 1 \) given a density \( \rho \), compares to our choice of prioritizing edges with lowest connectivity score \( \mu \) as per equation 8. We report the results in Table 4. We see that **LASER** greatly outperforms the random rewiring, verifying our claim that guiding the rewiring through \( \mu \) is a more sound approach. **Scalability.** The operations required to compute \( \mu \) and \( \nu \) in **LASER** are designed to be efficiently implemented on modern hardware accelerators, mostly relying on matrix multiplication. Furthermore, the rewiring operation is done once and stored for future runs. The \( \rho \) factor can be tuned to calibrate the density of the rewiring, giving further control on the training efficiency. **LASER** does not seem to significantly impact the run-time compared to the standard baseline models and we found through a synthetic benchmarking experiment that our implementation of **LASER** is able to rewire graphs with 100k nodes and a million edges in 2 hours. This is in contrast to FOSR and SDRF that failed to finish the computation within 24 hours. We report a large number of benchmarking experiments, alongside a theoretical complexity analysis in the Appendix (Section D). ### 7 CONCLUSION In this work, we have identified shortcomings of rewiring techniques and argued that a rewiring must: (i) improve connectivity, (ii) respect locality, and (iii) preserve sparsity. Unlike current spectral and spatial rewirings that compromise some of these properties, we have outlined a general rewiring paradigm that meets criteria (i)–(iii) by interpolating between the input graph and a better connected one via locally constrained sequential rewiring. We have then proposed a specific instance of this paradigm — **LASER** — and verified, both theoretically and empirically, that it satisfies (i)-(iii). **Limitations and Future Work.** In this paper, we considered a simple instance of the general rewiring paradigm outlined in Section 4, but we believe that an interesting research direction would be to explore alternative choices for both the connectivity and locality measures, ideally incorporating features in a differentiable pipeline similar to Arnaiz-Rodríguez et al. (2022). Furthermore, the identification between graph-rewiring on the one hand, and multi-relational GNNs and temporal-GNNs on the other, could lead to interesting connections between the two settings, both theoretically (e.g., what is the expressive power of a certain rewiring policy?) and practically, where techniques working in one case could be effortlessly transferred to the other. Finally, we highlight that, as is customary in rewiring approaches, it is always hard to pinpoint with certainty the reason for any performance improvement, including whether such an improvement can be truly credited to over-squashing and long-range interactions. 
We have tried to address this point through multiple ablations studies. ACKNOWLEDGEMENTS FdG, FB, and MB are partially supported by the EPSRC Turing AI World-Leading Research Fellowship No. EP/X040062/1. We would like to thank Google Cloud for kindly providing computational resources for this work. REFERENCES Ralph Abboud, Radoslav Dimitrov, and Ismail Ilkan Ceylan. Shortest path networks for graph property prediction. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/forum?id=mWzWvMxuFg1. Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, pp. 21–29. PMLR, 2019. Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021. Adrián Arnaiz-Rodríguez, Ahmed Begga, Francisco Escolano, and Nuria Oliver. DiffWire: Inductive Graph Rewiring via the Lovász Bound. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/pdf?id=IXvfIex0mX6f. Pradeep Kr Banerjee, Kedar Karhadkar, Yu Guang Wang, Uri Alon, and Guido Montúfar. Oversquashing in gnns through the lens of information contraction and graph expansion. In Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1–8. IEEE, 2022. Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations, 2019. Pablo Barcelo, Mikhail Galkin, Christopher Morris, and Miguel Romero Orth. Weisfeiler and leman go relational. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/forum?id=wY_IYhh6pqj. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. 2018. Mitchell Black, Zhengchao Wan, Amir Nayyeri, and Yusu Wang. Understanding oversquashing in gnns through the lens of effective resistance. In International Conference on Machine Learning, pp. 2528–2547. PMLR, 2023. Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. In Advances in Neural Information Processing Systems, volume 34, pp. 2625–2640, 2021. Rickard Brüel-Gabrielsson, Mikhail Yurochkin, and Justin Solomon. Rewiring with positional encodings for graph neural networks. arXiv preprint arXiv:2201.12674, 2022. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations, 2014. Chen Cai, Truong Son Hy, Rose Yu, and Yusu Wang. On the connection between mpnn and graph transformer. arXiv preprint arXiv:2301.11956, 2023. Ashok K Chandra, Prabhakar Raghavan, Walter L Ruzzo, Roman Smolensky, and Prasoon Tiwari. The electrical resistance of a graph captures its commute and cover times. computational complexity, 6(4):312–340, 1996. Andreea Deac, Marc Lackenby, and Petar Veličković. Expander graph propagation. In The First Learning on Graphs Conference, 2022.
Svy1XoOLXj
Is the motivation that AdaLoRA can overfit to the training set? If overfitting happens, can you just prune the singular values more aggressively? What happens if you just use a larger weight decay in AdaLoRA?
BiLoRA: A Bi-level Optimization Framework for Low-Rank Adapters

Anonymous authors
Paper under double-blind review

Abstract

Low-rank adaptations (LoRA) are widely employed for fine-tuning large-scale pretrained models in downstream tasks, by learning low-rank incremental matrices. LoRA and its variants such as AdaLoRA train an entire low-rank incremental matrix on a single training dataset, which often leads to overfitting to training data and inferior generalization on test data. To address this problem, we propose a bi-level optimization (BLO) based method for alleviating overfitting. Our method parameterizes a low-rank incremental matrix in a pseudo singular value decomposition form, and separates the training of pseudo singular vectors and values onto different data subsets in different optimization problems. This separation alleviates the risk of overfitting to a single dataset and improves generalization on other data. Specifically, in the lower level of our BLO formulation, we train the pseudo singular vectors on a subset of the training data. In the upper level, we learn the pseudo singular values on the other subset of the training data. The two levels of optimization problems are mutually dependent on each other and solved jointly. On ten datasets from natural language understanding and generation tasks and on various popular large pretrained models, our method achieves significantly better performance than LoRA, AdaLoRA, and other fine-tuning baseline methods with similar amounts of trainable parameters.

1 Introduction

Large language models (LLMs) have achieved excellent performance across various natural language processing tasks (Devlin et al., 2018; He et al., 2020; Radford et al., 2019; Brown et al., 2020). The prevalent paradigm for leveraging large language models in application development involves pretraining on large-scale data and subsequently fine-tuning the pretrained model on specific downstream tasks. With the ever-increasing size of large language models, fully fine-tuning (Qiu et al., 2020) them on various downstream tasks incurs significant computation costs. In addition, the large number of parameters in pre-trained models may make the fine-tuning process more prone to overfitting (Karimi Mahabadi et al., 2021). Researchers have proposed multiple fine-tuning methods to address these issues. These methods, aiming to reduce the parameter count during fine-tuning while maintaining performance, can be collectively referred to as Parameter-Efficient Fine-Tuning (PEFT) methods (Houlsby et al., 2019; Ding et al., 2023; Mao et al., 2021).

Low-Rank Adaptation (LoRA) (Hu et al., 2021) is one of the important methods for PEFT. Different from adapter tuning (Houlsby et al., 2019; Rebuffi et al., 2017; Pfeiffer et al., 2020), LoRA does not add small neural modules to the pre-trained model. LoRA takes inspiration from Li et al. (2018); Aghajanyan et al. (2020), which show that well-trained over-parameterized models actually exist within a space characterized by a low intrinsic dimension. It introduces incremental updates named low-rank adapters to frozen pre-trained weights and parameterizes them in the form of the product of two much smaller matrices. For $h = W_0x$, the modified forward pass yields: $h = W_0x + \Delta Wx = W_0x + BAx$, where $\Delta W \in \mathbb{R}^{d \times k}$, $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$ and $r \ll \min\{d, k\}$.
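To make this reparameterization concrete, below is a minimal PyTorch-style sketch of a LoRA-adapted linear layer. The class name, the initialization of \(A\) and \(B\), and the optional \(\alpha/r\) scaling follow common conventions from the LoRA literature and are not code from this paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA-adapted linear layer: h = W0 x + (alpha/r) * B A x.
    The pretrained weight W0 is frozen; only the low-rank factors A and B are trained."""
    def __init__(self, d_out, d_in, r=8, alpha=16):
        super().__init__()
        self.W0 = nn.Linear(d_in, d_out, bias=False)
        self.W0.weight.requires_grad_(False)                 # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # A in R^{r x k}
        self.B = nn.Parameter(torch.zeros(d_out, r))         # B in R^{d x r}, zero init so ΔW = 0
        self.scale = alpha / r                               # standard LoRA scaling

    def forward(self, x):
        return self.W0(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_out=768, d_in=768, r=8)
x = torch.randn(4, 768)
print(layer(x).shape)   # torch.Size([4, 768])
```

Only \(A\) and \(B\) receive gradients, so the number of trainable parameters per adapted layer drops from \(dk\) to \(r(d+k)\).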
With much less trainable parameters, LoRA achieves comparable or even better performance than full fine-tuning and other adaptation methods (Hu et al., 2021). LoRA sets the rank of incremental matrices at different layers to be the same, without considering the fact that pretrained weight matrices in different layers have varying importance for a downstream task. A more important weight matrix should be finetuned more, with a larger number of weight parameters (equivalently, a larger rank) in its incremental matrix. To address this issue, AdaLoRA (Zhang et al., 2023) sets different ranks for incremental matrices at different layers adaptively according to layers’ importance. It parameterizes a low-rank incremental matrix $\Delta W$ as $\Delta W = P \Lambda Q$ to mimic SVD. With regularization to enforce the orthogonality of $P$ and $Q$, $\Lambda$ can be approximately considered as a singular value matrix. AdaLoRA uses singular values and vectors to compute important scores for determining how to set layer-specific ranks. One limitation of AdaLoRA is that it learns pseudo singular vectors in $\{P, Q\}$ and pseudo singular values in $\Lambda$ simultaneously by minimizing the fine-tuning loss on a single training dataset. This often results in overfitting to the training data and unsatisfactory generalization on test data. Particularly, $\Lambda$ determines the number of learnable parameters and the contribution of each rank-1 update matrix (outer product of two pseudo singular vectors) in $\Delta W$. Learning $\Lambda$ by minimizing a single dataset’s training loss can easily render these contributions and parameter amounts tailored to this dataset, leading to inferior generalization performance on other data. To address this problem, we propose a bi-level optimization (BLO) based method to learn $\{P, Q\}$ and $\Lambda$ on different subsets of the training data. A BLO formulation (Sinha et al., 2017) consists of two levels of nested optimization problems. The optimal variables in the lower level are the inputs of the objective function in the upper level. The non-optimal variables in the upper level are the inputs of the objective function in the lower level. In the lower level of our formulation, we train $\{P, Q\}$ by minimizing a fine-tuning loss on a subset $S$ of the training dataset $D$ while tentatively fixing $\Lambda$. The optimally learned $\{P^*(\Lambda), Q^*(\Lambda)\}$ are functionals of $\Lambda$. In the upper level, we validate $\{P^*(\Lambda), Q^*(\Lambda)\}$ on the rest of the training data $D \setminus S$. The validation loss is a function of $\Lambda$ and we learn $\Lambda$ by minimizing this loss. By separating the learning of $\{P, Q\}$ and $\Lambda$ onto different data subsets in different optimization problems, our method can effectively alleviate overfitting to a single dataset and improve generalization performance to other datasets. Our contributions can be summarized as follows: - We propose a novel bi-level optimization based method to alleviate overfitting in LoRA and its variants. Different from previous methods which learn an entire incremental matrix on a single dataset, our method separates the learning of parameter subsets onto different datasets in different optimization problems which are tightly coupled. In this way, our method can effectively alleviate overfitting to a single dataset. 
- We demonstrate the effectiveness of our method on ten datasets in both natural language understanding and generation tasks and on various pretrained large models including RoBERTa, DeBERTa, and GPT2. Compared with LoRA, AdaLoRA and other popular fine-tuning methods, our method achieves significantly better performance with similar amounts of trainable parameters. 2 RELATED WORK Low-Rank Adaptation. Li et al. (2018) and Aghajanyan et al. (2020) demonstrate that widely-used pre-trained models possess a very low intrinsic dimension and it is possible to achieve comparable fine-tuning performance by utilizing a reparameterization with reduced dimensionality. This inspires low-rank adapters to be introduced for fine-tuning. LoRA introduces incremental updates to frozen pre-trained weights as low-rank adapters (Hu et al., 2021). By parameterizing the low-rank adapter as the product of two low-rank matrices, LoRA greatly reduces trainable parameters while maintaining or even improving the performance over full fine-tuning. Multiple methods have been proposed to improve the time/memory efficiency and performance of low-rank adapters based on LoRA. DyLoRA (Valipour et al., 2022) trains low-rank adapter blocks for multiple ranks by sorting the learned representations dynamically during training. QLoRA (Dettmers et al., 2023) introduces multiple strategies to reduce memory footprint for low-rank adapters, lowering the memory barrier for training LLMs. LoraHub (Huang et al., 2023) is designed to facilitate the efficient combination of LoRA modules trained on various tasks using only a few examples from a new task. AdaLoRA (Zhang et al., 2023) allocates the parameter budget adaptively according to the importance of modules to improve the fine-tuning performance in specific budget settings. It parameterizes the incremental updates in the form of singular value decomposition and iteratively prunes singular values in correspondence to their importance scores during training. Different from these existing methods which train all the parameters in incremental updates on a single training dataset and therefore often lead to overfitting, our method (based on the SVD reparameterization of incremental updates) separately train singular values and singular vectors in two different optimization levels, which effectively alleviates the risk of overfitting to a single dataset. **Bi-level Optimization (BLO).** BLO has gained much attention for formulating various machine learning methods including meta-learning [Finn et al., 2017; Rajeswaran et al., 2019], hyperparameter optimization [Franceschi et al., 2017; Lorraine et al., 2020], neural architecture search [Liu et al., 2018; Zhang et al., 2021], reinforcement learning [Rajeswaran et al., 2020], to name a few. In addition to applying BLO to various machine learning problems, various algorithms have been proposed to address this specific form of optimization problem, including zeroth-order methods like Bayesian optimization [Cui & Bai, 2019], first-order algorithms based on hypergradients [Pearlmutter & Siskind, 2008; Lorraine et al., 2020], etc. Gradient-based BLO is efficient for scaling up to high-dimensional problems with a large number of trainable parameters. We expand the application scenarios of gradient-based BLO and build an efficient training framework to improve the generalization performance of low-rank adapters. ### 3 METHODS We propose BiLoRA (Figure 1), a novel low-rank adapter training framework based on bi-level optimization. 
Similar to AdaLoRA, incremental matrices in our method are parameterized in a pseudo SVD form with learnable pseudo singular vectors $\mathcal{V}$ and pseudo singular values $\mathcal{E}$. We split the training dataset into two non-overlapping subsets $D_1$ and $D_2$. In the lower level, we train $\mathcal{V}$ on $D_1$ while fixing $\mathcal{E}$. The optimal solution $\mathcal{V}^*(\mathcal{E})$ (which is a functional of $\mathcal{E}$) is fed into the upper level. In the upper level, we train $\mathcal{E}$ on the dataset $D_2$. The updated $\mathcal{E}$ is fed into the lower level. The two levels of optimization problems are solved iteratively until convergence. #### 3.1 Parameterization of Low-Rank Incremental Matrices Following [Zhang et al., 2023], we parameterize the low-rank incremental matrix $\Delta W$ as $\Delta W = P \Lambda Q$, which mimics SVD. The diagonal matrix $\Lambda$ contains pseudo singular values and the approximately orthogonal matrices $P$ and $Q$ represent pseudo left/right singular vectors. We use $k$ to index the incremental matrix, i.e., $\Delta W_k = P_k \Lambda_k Q_k$ for $k = 1, \ldots, n$, where $n$ is the number of low-rank adapters. We denote the $i$-th singular value of $\Delta W_k$ as $\lambda_{k,i}$ and the rank of low-rank adapters as $r$. We further denote the parameter sets as $\mathcal{P} = \{P_k\}_{k=1}^n$, $\mathcal{E} = \{\Lambda_k\}_{k=1}^n$, $\mathcal{Q} = \{Q_k\}_{k=1}^n$, and $\mathcal{V} = \{\mathcal{P}, \mathcal{Q}\}$. To encourage $P_k$ and $Q_k$ to be approximately orthogonal, we use the following regularizer as in AdaLoRA [Zhang et al., 2023]: $$R_1 = \sum_{k=1}^{n} (\|P_k^T P_k - I\|_F^2 + \|Q_k Q_k^T - I\|_F^2),$$ where $I$ is an identity matrix and $\|\cdot\|_F$ denotes the Frobenius norm. **Parameterization of Pseudo Singular Values.** We parameterize the pseudo singular values in $\Lambda$ in three specific forms. - **Real-Value:** All pseudo singular values are real-valued without any constraints. - **Softmax:** Given a real vector $v$, we apply the softmax operation to it. $\text{softmax}(v)$ are used as the pseudo singular values. These values add up to one and represent the contributions of their corresponding singular vector pairs. • **Approximately Binary**: Given a real vector \( v \), we apply element-wise sigmoid to it to transform the values in \( v \) into \((0, 1)\). Then we use an element-wise entropy regularizer to encourage the values in \( \text{sigmoid}(v) \) are close to either zero or one. The regularizer is defined as: \[ R_2(\mathcal{E}) = \sum_{k=1}^{n} \sum_{i=1}^{r} \lambda_{k,i} \log \lambda_{k,i} + (1 - \lambda_{k,i}) \log(1 - \lambda_{k,i}). \] This setting automatically assigns either a high or low importance to each singular vector pair with the corresponding singular value as zero or one, effectively serving as an automatic rank selection mechanism. ### 3.2 A Bi-level Optimization Framework Our method is based on bi-level optimization, where pseudo singular vector matrices \( \mathcal{V} \) and their corresponding pseudo singular value matrices \( \mathcal{E} \) are set as trainable parameters for the lower and upper level respectively. **Lower Level.** In the lower level, we perform LoRA fine-tuning of a pre-trained model by minimizing a loss \( C \) defined on the first dataset \( D_1 \) and low-rank incremental matrices \( \{\Delta W_k\}_{k=1}^r \). 
Calculating \( C \) involves the forward pass for each input example \( x: W_0x + \Delta Wx = W_0x + P\Lambda Qx \), where \( W_0 \) is a weight matrix in the pretrained model. \( R_1 \) in Eq.(1) is applied to promote the approximate orthogonality of \( P \) and \( Q \). The overall training objective is \( L_1 = C(\mathcal{V}, \mathcal{E}; D_1) + \gamma_1 R_1(\mathcal{V}) \), where \( \gamma_1 \) is a tradeoff parameter. In this level, we only train \( \mathcal{V} \), while keeping \( \mathcal{E} \) tentatively fixed. \( \mathcal{E} \) will be updated in the upper level. In the end, the lower level amounts to solving the following problem: \[ \mathcal{V}^*(\mathcal{E}) = \arg \min_{\mathcal{V}} C(\mathcal{V}, \mathcal{E}; D_1) + \gamma_1 R_1(\mathcal{V}). \] \( \mathcal{V}^*(\mathcal{E}) \) denotes that the optimal solution \( \mathcal{V}^* \) depends on \( \mathcal{E} \) since \( \mathcal{V}^* \) depends on \( C \) which depends on \( \mathcal{E} \). **Upper Level.** In the upper level, we validate the fine-tuned model where the incremental matrices are parameterized by the optimally learned \( \mathcal{V}^*(\mathcal{E}) \) and unlearned pseudo singular values in \( \mathcal{E} \), on the second dataset \( D_2 \). This results in a validation loss \( C(\mathcal{V}^*(\mathcal{E}), \mathcal{E}; D_2) \), which is a function of \( \mathcal{E} \). We learn \( \mathcal{E} \) by minimizing this loss. Optionally, we use the regularizer \( R_2 \) in Eq.(2) to encourage the pseudo singular values in \( \mathcal{E} \) to be approximately binary. The overall objective function is \( L_2 = C(\mathcal{V}^*(\mathcal{E}), \mathcal{E}; D_2) + \gamma_2 R_2(\mathcal{E}) \), where \( \gamma_2 \) is a tradeoff parameter. This level amounts to solving the following optimization problem: \[ \min_{\mathcal{E}} C(\mathcal{V}^*(\mathcal{E}), \mathcal{E}; D_2) + \gamma_2 R_2(\mathcal{E}). \] **A Bi-level Optimization Framework.** Integrating these two interdependent levels of optimization problems, we have the following bi-level optimization framework: **Upper Level:** \( \min_{\mathcal{E}} C(\mathcal{V}^*(\mathcal{E}), \mathcal{E}; D_2) + \gamma_2 R_2(\mathcal{E}) \) **Lower Level:** \( s.t. \mathcal{V}^*(\mathcal{E}) = \arg \min_{\mathcal{V}} C(\mathcal{V}, \mathcal{E}; D_1) + \gamma_1 R_1(\mathcal{V}) \) Note that these two levels of optimization problems are mutually dependent on each other. The output of the lower level, which is \( \mathcal{V}^*(\mathcal{E}) \), is the input of the upper level. The optimization variable \( \mathcal{E} \) in the upper level is the input of the lower level. By solving these two interconnected problems jointly, we can learn the pseudo singular vectors and values end-to-end. **Optimization Algorithm.** We utilize a gradient-based optimization algorithm (Choe et al., 2022) to solve this bi-level optimization problem. Our overall optimization algorithm is summarized in Algorithm 1. Specifically, in the lower level, we perform gradient descent for a preset number of steps \( T_1 \) on the pseudo singular vector matrices \( \mathcal{V} \) to approximate the optimal solution \( \mathcal{V}^*(\mathcal{E}) \). With the initial \( \mathcal{V} \) as \( \mathcal{V}^{(0)} \) and learning rate \( \eta_1 \), the gradient descent steps can be formulated as: \[ \mathcal{V}^{(t)} = \mathcal{V}^{(t-1)} - \eta_1 \frac{dL_1}{d\mathcal{V}^{(t-1)}}, \quad \text{for } t = 1, 2, 3, ..., T_1. 
\] Table 1: RoBERTa\textsubscript{base/large} (R\textsubscript{bl}) with different adaptation methods on the GLUE benchmark. We report the average result of five runs with different random seeds. Higher is better for all metrics. Numbers except BiLoRA are published in prior works. * indicates model already adapted to MNLI when adapting to MRPC, RTE, and STS-B, while † indicates model started as pre-trained when adapting to all datasets. | Method | Params | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Avg. | |-----------------|--------|------|-------|------|------|------|-----|-----|-------|------| | R\textsubscript{b}(FT) | 125.0M | 87.6 | 94.8 | 90.2 | 63.6 | 92.8 | \textbf{91.9} | 78.7 | 91.2 | 86.4 | | R\textsubscript{b}(BitFit) | 0.1M | 84.7 | 93.7 | \textbf{92.7} | 62.0 | 91.8 | 84.0 | 81.5 | 90.8 | 85.2 | | R\textsubscript{b}(Adpt\textsuperscript{D}) | 0.3M | 87.1±0.1 | 94.2±1.1 | 88.5±1.1 | 60.8±4.1 | 93.1±1.1 | 90.2±0.0 | 71.5±2.7 | 89.7±3.1 | 84.4 | | R\textsubscript{b}(Adpt\textsuperscript{H}) | 0.9M | 87.3±1.1 | 94.7±3.1 | 88.4±1.1 | 62.6±9.1 | 93.0±2.0 | 90.6±0.0 | 75.9±2.2 | 90.3±1.1 | 85.4 | | R\textsubscript{b}(LoRA)* | 0.3M | 87.5±3.3 | \textbf{95.1±2.2} | 89.7±7.7 | 63.4±12.1 | \textbf{93.3±3.3} | 90.8±1.1 | 86.6±7.7 | 91.5±2.2 | 87.2 | | R\textsubscript{b}(BiLoRA)* | 0.3M | 87.9±2.2 | \textbf{95.1±2.2} | 91.7±3.3 | \textbf{64.8±6.6} | \textbf{93.3±2.2} | 91.4±3.3 | \textbf{87.2±4.4} | \textbf{91.7±2.2} | \textbf{87.9} | | R\textsubscript{r}(FT)* | 355.0M | 90.2 | 96.4 | 90.9 | 68.0 | 94.7 | \textbf{92.2} | 86.6 | 92.4 | 88.9 | | R\textsubscript{r}(LoRA)* | 0.8M | \textbf{90.6±2.2} | 96.2±5.2 | 90.9±1.2 | 68.2±1.9 | 94.9±3.3 | 91.6±1.1 | 87.4±2.5 | \textbf{92.6±2.2} | 89.0 | | R\textsubscript{r}(BiLoRA)* | 0.8M | \textbf{90.6±3.3} | \textbf{96.7±4.4} | \textbf{92.6±1.4} | \textbf{69.2±1.6} | \textbf{95.0±1.1} | \textbf{92.0±1.1} | \textbf{89.5±1.1} | \textbf{92.6±3.8} | \textbf{89.8} | | R\textsubscript{r}(Adpt\textsuperscript{D})† | 3.0M | 90.2±3.3 | 96.1±3.3 | 90.2±7.7 | 68.3±1.0 | 94.8±2.2 | 91.9±1.1 | 83.8±2.9 | 92.1±1.7 | 88.4 | | R\textsubscript{r}(Adpt\textsuperscript{H})† | 0.8M | 90.5±3.3 | 96.6±2.2 | 89.7±1.2 | 67.8±2.5 | 94.8±3.3 | 91.7±2.2 | 80.1±2.9 | 91.9±4.4 | 87.9 | | R\textsubscript{r}(Adpt\textsuperscript{H})† | 6.0M | 89.9±5.3 | 96.2±3.3 | 88.7±2.9 | 66.5±4.4 | 94.7±2.2 | \textbf{92.1±1.1} | 83.4±1.1 | 91.0±1.7 | 87.8 | | R\textsubscript{r}(Adpt\textsuperscript{H})† | 0.8M | 90.3±3.3 | 96.3±3.3 | 87.7±1.7 | 66.3±2.0 | 94.7±2.2 | 91.5±1.1 | 72.9±2.9 | 91.5±5.5 | 86.4 | | R\textsubscript{r}(LoRA)† | 0.8M | \textbf{90.6±2.2} | 96.2±5.2 | 90.2±1.0 | 68.2±1.9 | 94.8±3.3 | 91.6±1.1 | 85.2±1.1 | 92.3±5.5 | 88.6 | | R\textsubscript{r}(BiLoRA)† | 0.8M | \textbf{90.6±3.3} | \textbf{96.7±4.4} | \textbf{92.2±1.0} | \textbf{69.2±1.6} | \textbf{95.0±1.1} | \textbf{92.0±1.1} | \textbf{87.4±1.0} | \textbf{92.6±3.8} | \textbf{89.5} | We plug \( V^*(E) \approx V(T_1) \) into the overall objective function in the upper level and get an approximate objective \( \hat{L}_2 = C(V(T_1), E; D_2) + \gamma_2 R_2(E) \). We perform gradient descent for a preset number of steps \( T_2 \) on the pseudo singular values in \( E \) to minimize \( \hat{L}_2 \). With the initial \( E \) as \( E^{(0)} \) and learning rate \( \eta_2 \), the gradient descent steps can be formulated as: \[ E^{(t)} = E^{(t-1)} - \eta_2 \frac{d\hat{L}_2}{dE^{(t-1)}}, \quad \text{for } t = 1, 2, 3, ..., T_2. \] These steps constitute one global optimization step. 
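To make the interleaving of the two levels concrete, the following is a simplified sketch of one global step: \(T_1\) lower-level updates of \(\mathcal{V} = \{P, Q\}\) with \(\mathcal{E}\) frozen, followed by \(T_2\) upper-level updates of \(\mathcal{E}\) with \(\mathcal{V}\) frozen, using the Softmax parameterization and the regularizer \(R_1\). It is a first-order approximation that drops the \(\partial \mathcal{V}^{(T_1)} / \partial \mathcal{E}\) term of the hypergradient (the actual implementation computes it with Betty), and all names and hyperparameter values are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoSVDAdapter(nn.Module):
    """Illustrative ΔW = P Λ Q with Λ = diag(softmax(v)) (the 'Softmax' option).
    P, Q play the role of the lower-level variables V; v of the upper-level variables E."""
    def __init__(self, d_out, d_in, r=4):
        super().__init__()
        self.P = nn.Parameter(torch.randn(d_out, r) * 0.01)
        self.Q = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.v = nn.Parameter(torch.zeros(r))                 # pseudo singular values

    def delta_w(self):
        return self.P @ torch.diag(torch.softmax(self.v, dim=0)) @ self.Q

    def orth_reg(self):                                       # R1: P^T P ≈ I and Q Q^T ≈ I
        I = torch.eye(self.P.shape[1])
        return ((self.P.T @ self.P - I) ** 2).sum() + ((self.Q @ self.Q.T - I) ** 2).sum()

def task_loss(adapter, W0, batch):
    x, y = batch                                              # toy classification head
    return F.cross_entropy(x @ (W0 + adapter.delta_w()).T, y)

def global_step(adapter, W0, d1_batches, d2_batches, eta1=1e-2, eta2=1e-2, gamma1=0.1):
    opt_lower = torch.optim.SGD([adapter.P, adapter.Q], lr=eta1)
    opt_upper = torch.optim.SGD([adapter.v], lr=eta2)
    for batch in d1_batches:                                  # T1 lower-level steps on V
        opt_lower.zero_grad(); opt_upper.zero_grad()
        (task_loss(adapter, W0, batch) + gamma1 * adapter.orth_reg()).backward()
        opt_lower.step()                                      # E stays fixed here
    for batch in d2_batches:                                  # T2 upper-level steps on E
        opt_lower.zero_grad(); opt_upper.zero_grad()
        task_loss(adapter, W0, batch).backward()
        opt_upper.step()                                      # V stays fixed here

# Toy usage: W0 is a frozen 'pretrained' weight, D1/D2 are two disjoint data splits.
W0 = torch.randn(10, 32)
adapter = PseudoSVDAdapter(d_out=10, d_in=32, r=4)
make_split = lambda: [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(3)]
global_step(adapter, W0, make_split(), make_split())
```

A full implementation would also differentiate through (or approximate differentiation through) the \(T_1\) lower-level updates, as captured by the chain-rule hypergradient given next.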
We take iterative global steps between the lower level and upper level to solve this bi-level optimization problem until converge. Specifically, following the chain rule, the hypergradient for the upper level can be calculated as: \[ \frac{d\hat{L}_2}{dE} = \frac{\partial \hat{L}_2}{\partial E} + \frac{\partial V(T_1)}{\partial E} \times \frac{\partial \hat{L}_2}{\partial V(T_1)}. \] **Algorithm 1 BiLoRA** 1: **Input:** Datasets \( D_1, D_2 \); unroll steps \( T_1, T_2 \); learning rates \( \eta_1, \eta_2 \). 2: **In a Global Step do** 3: **for** \( t = 1, 2, 3, ..., T_1 \) **do** 4: Sample a minibatch \( B_1^{(t)} \) from \( D_1 \) 5: Compute \( \frac{dL_1}{dV^{(t-1)}} \) on \( B_1^{(t)} \) and update \( V^{(t)} = V^{(t-1)} - \eta_1 \frac{dL_1}{dV^{(t-1)}} \) 6: **for** \( t = 1, 2, 3, ..., T_2 \) **do** 7: Sample a minibatch \( B_2^{(t)} \) from \( D_2 \) 8: Compute \( \frac{d\hat{L}_2}{dE^{(t-1)}} = \frac{\partial \hat{L}_2}{\partial E^{(t-1)}} + \frac{\partial V(T_1)}{\partial E^{(t-1)}} \times \frac{\partial \hat{L}_2}{\partial V(T_1)} \) and update \( E^{(t)} = E^{(t-1)} - \eta_2 \frac{d\hat{L}_2}{dE^{(t-1)}} \) 9: **end this step** ### 4 EXPERIMENTS We evaluated the downstream performance of BiLoRA on RoBERTa (Liu et al., 2019), DeBERTa (He et al., 2020) and GPT-2 (Radford et al., 2019), and compared with LoRA (Hu et al., 2021), AdaLoRA (Zhang et al., 2023), and other baselines. Our experiments covered a wide range of tasks, from natural language understanding (NLU) to generation (NLG). Specifically, we evaluated Table 2: DeBERTa-v3-base (Dv3) with different adaptation methods, on the GLUE benchmark. We report the average result of five runs with different random seeds. Higher is better. * indicates numbers published in prior works. BiLoRA outperforms FT, LoRA, AdaLoRA, and other adaptation methods with equal or less parameters. | Method | Params | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Avg. | |-------------------------|--------|------|-------|------|------|------|-----|-----|-------|------| | Dv3(FT)* | 184.0M | 90.01| 95.63 | 89.46| 69.19| 94.03| 92.40| 83.75| 91.60 | 88.09| | Dv3(AdptH)* | 0.6M | 90.18| 95.30 | 89.22| 67.87| 93.76| 91.65| 85.56| 91.30 | 87.93| | Dv3(AdptP)* | 0.6M | 90.22| 95.53 | 89.22| 69.48| 93.98| 91.62| 84.12| 91.52 | 88.04| | Dv3(LoRA)* | 0.3M | 90.34| 94.95 | 89.71| 68.71| 94.03| 91.61| 85.56| 91.68 | 88.15| | Dv3(AdaLoRA)* | 0.3M | 90.68| 95.80 | 90.44| 70.04| 94.49| 91.78| 87.36| 91.63 | 88.86| | Dv3(BiLoRA) | 0.3M | 90.81| 96.02 | 91.42| 70.52| 94.25| 91.82| 88.45| 91.96 | 89.41| RoBERTa and DeBERTa on the GLUE benchmark (Wang et al., 2018) and GPT-2 on the E2E NLG challenge (Novikova et al., 2017). We used DeBERTa-xxlarge(1.5B) to evaluate the scaling-up performance of our method. We used NVIDIA A100 for all experiments. 4.1 Baselines We compared with the same baselines as LoRA and AdaLoRA, and used the reported results in previous work. Additionally, we also took LoRA and AdaLoRA as our baselines to evaluate the effectiveness of our method. **Full Fine-Tuning (FT)** is a frequently employed method for adaptation. The model is initialized with pre-trained weights and biases and all model parameters are subjected to gradient updates. We also included a simple variant reported in prior work on GPT-2 (Li & Liang, 2021), which only adapts the last two layers while freezing others. 
**Bias-only or BitFit** (Zaken et al., 2021) is an effective PEFT method which only trains the bias vectors while freezing everything else in the pre-trained model. **Prefix-embedding tuning (PreEmbed)** introduces specialized tokens within the input tokens, featuring trainable word embeddings that typically do not belong to the model’s vocabulary (Li & Liang, 2021). **Prefix-layer tuning (PreLayer)** learns the activations after every Transformer layer by replacing the activations computed from previous layers with trainable parameters. This method can be seen as an extension to prefix-embedding tuning. **Adapter tuning** (Houlsby et al., 2019) inserts layer-adapters between neural modules such as the MLP module or the self-attention module. We used four types of adapters as in LoRA (Hu et al., 2021): **AdapterL** with the adapter layer applied only after the MLP module and after a LayerNorm (Lin et al., 2020), **AdapterD** with some adapter layers dropped for increasing efficiency (Rücklé et al., 2020), **AdapterH** incorporates two fully connected layers within an adapter layer, with non-linearity in between (Houlsby et al., 2019), **AdapterP** (Pfeiffer et al., 2020) is similar to **AdapterL**, but introduces a novel two-stage transfer learning strategy to combine the knowledge from multiple source tasks. **LoRA** (Hu et al., 2021) adds trainable incremental update matrices to pretrained weight matrices. Following the experimental settings of LoRA, we applied BiLoRA to $W_q$ and $W_v$ matrices (the query and value weight matrices in the self-attention module) for a fair comparison. **AdaLoRA** (Zhang et al., 2023) proposes SVD-based adaptation and rank-allocation based on LoRA, which formulates the incremental matrices in the form of singular value decomposition and allocates rank budget based on importance scores. 4.2 Natural Language Understanding For natural language understanding (NLU) tasks, we conducted experiments on the General Language Understanding Evaluation (GLUE) benchmark for RoBERTa and DeBERTa. Please see Appendix A for more details on the models and datasets we use. Table 3: GPT-2 medium (M) and large (L) with different adaptation methods on the E2E NLG Challenge. For all metrics, higher is better. * indicates numbers published in prior works. We keep the same experimental settings as different adaptation baselines for a fair comparison. 
| Model&Method | Params | BLEU | NIST | MET | ROUGE-L | CIDEr | |--------------------|----------|---------|---------|---------|---------|-------| | GPT-2 M(FT)* | 354.92M | 68.2 | 8.62 | 46.2 | 71.0 | 2.47 | | GPT-2 M(AdptL)* | 0.37M | 66.3 | 8.41 | 45.0 | 69.8 | 2.40 | | GPT-2 M(AdptL*) | 11.09M | 68.9 | 8.71 | 46.1 | 71.3 | 2.47 | | GPT-2 M(AdptH)* | 11.09M | $67.3_{\pm 6}$ | $8.50_{\pm 0.07}$ | $46.0_{\pm 2}$ | $70.7_{\pm 2}$ | $2.44_{\pm 0.01}$ | | GPT-2 M(FTtop2)* | 25.19M | 68.1 | 8.59 | 46.0 | 70.8 | 2.41 | | GPT-2 M(PreLayer)* | 0.35M | 69.7 | 8.81 | 46.1 | 71.4 | 2.49 | | GPT-2 M(LoRA)* | 0.35M | $70.4_{\pm 1}$ | $8.85_{\pm 0.02}$ | $46.8_{\pm 2}$ | $71.8_{\pm 1}$ | $2.53_{\pm 0.02}$ | | GPT-2 M(BiLoRA) | 0.35M | $70.5_{\pm 4}$ | $8.86_{\pm 0.03}$ | $46.9_{\pm 1}$ | $72.0_{\pm 2}$ | $2.54_{\pm 0.03}$ | | GPT-2 L(FT)* | 774.03M | 68.5 | 8.78 | 46.0 | 69.9 | 2.45 | | GPT-2 L(AdptL)* | 0.88M | $69.1_{\pm 1}$ | $8.68_{\pm 0.03}$ | $46.3_{\pm 0}$ | $71.4_{\pm 2}$ | $2.49_{\pm 0}$ | | GPT-2 L(AdptL*) | 23.00M | $68.9_{\pm 3}$ | $8.70_{\pm 0.04}$ | $46.1_{\pm 1}$ | $71.3_{\pm 2}$ | $2.45_{\pm 0.02}$ | | GPT-2 L(PreLayer)* | 0.77M | 70.3 | 8.85 | 46.2 | 71.7 | 2.47 | | GPT-2 L(LoRA)* | 0.77M | $70.4_{\pm 1}$ | $8.89_{\pm 0.02}$ | $46.8_{\pm 2}$ | $72.0_{\pm 2}$ | $2.47_{\pm 0.02}$ | | GPT-2 L(BiLoRA) | 0.77M | $70.5_{\pm 3}$ | $8.90_{\pm 0.04}$ | $47.0_{\pm 3}$ | $72.0_{\pm 4}$ | $2.49_{\pm 0.03}$ | **Implementation Details.** Our implementation is based on *Huggingface Transformers* (Wolf et al., 2019) and *Betty* (Choe et al., 2022). *Betty* is a software library for solving large-scale multilevel optimization (MLO) problems. Specifically, we load RoBERTa and DeBERTa models with *Huggingface Transformers* and build our bi-level optimization framework with *Betty*. **Experimental Settings.** Following LoRA, we used the development set in GLUE as test data since the test set is not publicly available. We divided the training set into two datasets, with an 8:2 split, serving as the lower-level and upper-level datasets respectively in our bi-level formulation. We maintained this fixed ratio for all tasks. Singular values were parameterized as Softmax if not otherwise stated and $R_1$ was added to the lower level as a regularizer. For RoBERTa base/large, we kept our experimental settings the same as LoRA. For DeBERTa-v3-base, we kept our experimental settings close to AdaLoRA while maintaining a lower parameter budget. We also kept hyperparameters such as sequence length, total batch size, LoRA rank, and LoRA alpha exactly the same as LoRA/AdaLoRA where necessary. These experimental settings allow for a fair comparison with all baseline methods. Please see the Appendix for all the hyperparameter settings. **Main Results.** The same as LoRA, we report the overall (matched and mismatched) accuracy for MNLI, Matthew’s correlation for CoLA, Pearson correlation for STS-B, and accuracy for the other tasks. Table 1 shows the results of RoBERTa base/large on the GLUE development set. As can be seen, our method outperforms LoRA on all datasets with the same number of trainable parameters. On most datasets, our method achieves better or on par performance compared with baselines. The average score of BiLoRA notably outperforms all the baselines. Table 2 shows the results of DeBERTa-v3-base on the GLUE development set. BiLoRA outperforms all baselines with equal or less trainable parameters. 
The improvements achieved by our method over baselines are attributed to its bi-level learning mechanism which separates the training of pseudo singular vectors and values on two distinct datasets. As a result, it effectively alleviates the risk of overfitting to one dataset and yields better generalization performance. In contrast, baseline methods train all parameters on the same dataset and thus lead to overfitting to this dataset. This is particularly evidenced by the observation that on smaller datasets such as CoLA, RTE, and MRPC where overfitting is more likely to occur, BiLoRA outperforms baselines by a larger margin. ### 4.3 Natural Language Generation For natural language generation (NLG) tasks, we followed the setup of Prefix-Tuning (Li & Liang, 2021) and LoRA (Hu et al., 2021) on GPT-2 for a direct comparison with LoRA and other adaptation methods. We evaluated GPT-2 medium and large on the E2E NLG Challenge. Please see Appendix A for more details on the models and datasets we used. Implementation Details. Our implementation is based on the fine-tuning code for GPT-2 in Huggingface and Betty (Choe et al., 2022). Specifically, we load GPT-2 models with the code of Huggingface and build our bi-level optimization framework with Betty. Experimental Settings. In our method, the training set and validation set are used as the lower-level and upper-level datasets respectively, and we report performance on the test set. Singular values were parameterized as Softmax if not otherwise stated. We kept our experimental settings the same as LoRA. Specifically, we kept hyperparameters such as sequence length, batch size, LoRA rank, LoRA alpha, and label smoothing exactly the same as LoRA. These experimental settings allow for a fair comparison with LoRA and other adaptation methods. Main Results. Table 3 shows the results of GPT-2 medium/large on the E2E test set. Our method outperforms LoRA and other methods on all metrics for both GPT-2 M and GPT-2 L. The results demonstrate the effectiveness of our method in Natural Language Generation (NLG) downstream tasks and the generalization capabilities of our method across different models and task types. 4.4 ANALYSIS Scaling Up to DeBERTa-XXL. We use DeBERTa-v2-xxlarge(1.5B) to evaluate the scaling-up performance of our method. The study was performed on three datasets of the GLUE benchmark due to the constraint of computational resources for keeping the same experimental settings with LoRA. Results in Table 4 show that BiLoRA achieves better or on par performance compared with LoRA and full fine-tuning (FT), indicating that BiLoRA yields better generalization when applied to fine-tuning models with a very large number of parameters. Table 4: Experiment results for scaling up to DeBERTa-XXL (Dv2). In BiLoRA, the values of hyperparameters including LoRA rank, LoRA alpha, and max length are the same as those in LoRA. * indicates numbers published in prior works. | Method | params | MNLI | MRPC | CoLA | Avg. | |-----------------|--------|------|------|------|------| | Dv2(FT)* | 1500.0M| 91.8 | 92.0 | 72.0 | 85.3 | | Dv2(LoRA)* | 4.7M | 91.9 ±2 | 92.6 ±6 | 72.4 ±1.1 | 85.6 | | Dv2(BiLoRA) | 4.7M | 91.9 ±3 | 92.7 ±4 | 73.0 ±4 | 85.9 | Ablation Studies on Pseudo Singular Values. In Section 3.1, we introduced three ways to parameterize the pseudo singular values: Real Value, Softmax, and Approximately Binary. We conduct experiments separately using these three parameterization methods while keeping other experimental settings the same. 
We test RoBERTa’s performance on the GLUE dataset. Results in Table 5 show that the Softmax parameterization exhibits the best performance, with Approximately Binary coming in a close second. Softmax and Approximately Binary outperform Real Value because they yield positive values which meet the constraint that singular values need to be non-negative while Real Value does not. Approximately Binary performs slightly worse than Softmax since it imposes a stronger constraint that the values need to be close to zero or one. Such a constraint limits the expressivity of the parameterization. Another observation is that under all the three parameterization methods, BiLoRA outperforms LoRA, demonstrating that BiLoRA is robust against different ways of representing the pseudo singular values and thus does not require extensive tuning for selecting the best parameterization. Table 5: Experiment results on three different parameterizations of pseudo singular values: Real Value, Softmax, and Approximately Binary. | Method | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Avg. | |----------------|------|-------|------|------|------|-----|-----|-------|------| | Rs(LoRA) | 87.5 | 95.1 | 89.7 | 63.4 | 93.3 | 90.8| 86.6| 91.5 | 87.2 | | Rs(Real Value)| 87.5 | 94.6 | 91.7 | 63.6 | 93.0 | 90.8| 86.6| 91.3 | 87.4 | | Rs(Softmax) | 87.9 | 95.1 | 91.7 | 64.8 | 93.3 | 91.4| 87.2| 91.7 | 87.9 | | Rs(Binary) | 87.6 | 94.8 | 91.4 | 64.4 | 93.0 | 91.2| 86.6| 91.5 | 87.6 | Ablation Study on Orthogonality-Promoting Regularization. We investigated how the tradeoff parameter $\gamma_1$ associated with the orthogonality-promoting regularizer $R_1$ in Eq.(1) affects the performance of our method. The study was performed on RoBERTa-base. Results in Table 6 show that our method is robust against different values of $\gamma_1$, which implies that using our method does not need to extensively tune this hyperparameter. Table 6: Experiment results of RoBERTa_base ($R_b$) on GLUE, under different values of $\gamma_1$. | Method | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Avg. | |--------|------|-------|------|------|------|-----|-----|-------|------| | $R_b(\gamma_1 = 0.0)$ | 87.8 | 95.0 | 91.7 | **64.8** | 93.1 | **91.5** | 87.2 | **91.7** | **87.9** | | $R_b(\gamma_1 = 0.1)$ | **87.9** | **95.1** | 91.7 | **64.8** | **93.3** | 91.4 | 87.2 | **91.7** | **87.9** | | $R_b(\gamma_1 = 0.2)$ | 87.8 | 95.0 | **91.9** | 64.4 | 93.1 | 91.2 | 86.9 | 91.5 | 87.7 | | $R_b(\gamma_1 = 0.3)$ | 87.2 | 94.6 | 91.4 | 63.6 | 92.8 | 90.9 | **87.4** | 91.2 | 87.4 | Computation Costs. Table 7 shows the training time of LoRA and our method. The total training time of our method on the eight datasets is lower than that of LoRA. This arises from the fact that BiLoRA converges with much fewer training epochs than LoRA. In the Softmax parameterization of pseudo singular values, each value is initialized with a mean equal to $1/r$, larger than that in Real-Value, which increases the overall magnitude of $\Delta W$ and allows a larger learning rate for the training process. The bi-level optimization framework effectively accommodates this larger learning rate by iteratively optimizing between the two levels without affecting the training stability. With such a large learning rate, even though bi-level optimization takes longer time for each training step, it takes much fewer training steps for training low-rank adapters compared to LoRA, thus reducing the total training time. 
Table 7: Training time (minutes) of LoRA and BiLoRA on RoBERTa_base/large ($R_{b/l}$) and the GLUE benchmark. | Method | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Total. | |--------|------|-------|------|------|------|-----|-----|-------|--------| | $R_b$(LoRA) | 3190.7 | 1096.2 | **30.2** | **193.0** | 709.8 | 2464.3 | **55.5** | **62.4** | 7802.1 | | $R_b$(BiLoRA) | **1407.1** | **260.1** | 240.3 | 260.3 | **375.2** | **1732.6** | 97.5 | 158.3 | **4531.4** | | $R_l$(LoRA) | 789.7 | **133.9** | **14.7** | **34.1** | 209.1 | 1446.7 | 10.0 | **23.1** | 2661.3 | | $R_l$(BiLoRA) | **707.5** | 160.8 | 19.2 | 62.5 | **200.4** | **1166.7** | 4.4 | 43.3 | **2363.8** | The results in Table 1 and 4 jointly demonstrate that BiLoRA enhances training performance while reducing the overall training time. These results substantiate the effectiveness of our method. 5 CONCLUSION AND FUTURE WORK We propose BiLoRA, a novel and general bi-level optimization framework for further enhancing the performance of low-rank adapters through addressing the overfitting issue in LoRA and its variants. By utilizing the SVD parameterization form of low-rank incremental matrices, our method separately trains pseudo singular vectors and singular values on different datasets in two different optimization levels. Such a method effectively alleviates overfitting and enhances the performance of low-rank incremental matrices while reducing the total training time. Results of extensive experiments on various NLU and NLG tasks and different large pre-trained models show that our method achieves notable performance improvements over existing adaptation methods. Our method opens up several potential directions for future research: 1) The parameterization form of pseudo singular values can be further developed to support automated rank selection. 2) Our bi-level optimization framework enhances the generalization capability of fine-tuned models, which encourages further in-depth theoretical analysis in this regard. REFERENCES Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. *arXiv preprint arXiv:2012.13255*, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. *arXiv preprint arXiv:1708.00055*, 2017. Sang Keun Choe, Willie Neiswanger, Pengtao Xie, and Eric Xing. Betty: An automatic differentiation library for multilevel optimization. *arXiv preprint arXiv:2207.02849*, 2022. Hua Cui and Jie Bai. A new hyperparameters optimization method for convolutional neural networks. *Pattern Recognition Letters*, 125:828–834, 2019. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. *arXiv preprint arXiv:2305.14314*, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 
Parameter-efficient fine-tuning of large-scale pre-trained language models. *Nature Machine Intelligence*, 5(3):220–235, 2023. Bill Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Third International Workshop on Paraphrasing (IWP2005)*, 2005. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pp. 1126–1135. PMLR, 2017. Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In *International Conference on Machine Learning*, pp. 1165–1173. PMLR, 2017. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint arXiv:2006.03654*, 2020. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pp. 2790–2799. PMLR, 2019. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. Lorahub: Efficient cross-task generalization via dynamic lora composition. *arXiv preprint arXiv:2307.13269*, 2023. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. Compacter: Efficient low-rank hypercomplex adapter layers. *Advances in Neural Information Processing Systems*, 34:1022–1035, 2021. Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. *arXiv preprint arXiv:1804.08838*, 2018. Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv preprint arXiv:2101.00190*, 2021.
ox2ATRM90I
In Section 4 Preprocessing, for the aggregated features, what are the time windows used for aggregation and the frequencies at which you update those features? Is there any specific consideration behind these choices?
Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML Robin van de Water¹, Hendrik Schmidt¹, Paul Elbers¹, Patrick Thoral², Bert Arnrich¹, Patrick Rockenschaub³ ¹Hasso Plattner Institute, University of Potsdam, Germany ²Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands ³Lab for AI in Medicine, Charité - Universitätsmedizin Berlin, Germany Abstract Medical applications of machine learning (ML) have experienced a surge in popularity in recent years. The intensive care unit (ICU) is a natural habitat for ML given the abundance of available data from electronic health records. Models have been proposed to address numerous ICU prediction tasks like the early detection of complications. While authors frequently report state-of-the-art performance, it is challenging to verify claims of superiority. Datasets and code are often not published, and cohort definitions, preprocessing pipelines, and training setups are difficult to reproduce. This work introduces Yet Another ICU Benchmark (YAIB), a modular framework that allows researchers to define reproducible and comparable clinical ML experiments; we offer an end-to-end solution from cohort definition to model evaluation. The framework natively supports most open-access ICU datasets (MIMIC III/IV, eICU, HiRID, AUMCdb) and is easily adaptable to future and custom ICU datasets. Combined with a transparent preprocessing pipeline and extensible training code for multiple ML and deep learning models, YAIB enables unified model development, transfer, and evaluation. Our benchmark comes with five predefined established prediction tasks (mortality, acute kidney injury, sepsis, kidney function, and length of stay) developed in collaboration with clinicians. Adding further tasks is straightforward by design. Using YAIB, we demonstrate that the choice of dataset, cohort definition, and preprocessing have a major impact on the prediction performance, often more so than model class, indicating an urgent need for YAIB as a holistic benchmarking tool. We provide our work to the clinical ML community to accelerate method development and enable real-world implementations. Software Repository: https://github.com/rvandewater/YAIB 1 Introduction The intensive care unit (ICU) has long been a focus for research into data-driven decision support, owing to the impact of medical decisions as well as the breadth and depth of data collected in this setting (Johnson et al., 2017). The COVID-19 pandemic confirmed the need for reliable machine learning (ML)-based clinical decision support that can alert healthcare professionals to worsening patient states, help them make a clinical diagnosis, or recommend treatment (Medic et al., 2019). Despite a steep increase in the number of published ICU prediction models (Shillan et al., 2019), hardly any have made their way into clinical practice (Eini-Porat et al., 2022; Fleuren et al., 2020b). A major obstacle to translation is an ongoing lack of comparability and reproducibility (Johnson et al., 2017). By using custom datasets and definitions, preprocessing pipelines, and evaluation schemes, the benefits of novel models are conflated with differences between patient case mix, task definitions, and cohort selection (Sarwar et al., 2023; Kelly et al., 2019). Reviewing models for early prediction of sepsis, for example, Moor et al. 
(2021b) found that the definition of sepsis, the time of prediction, and the available features differed substantially between the 22 included studies; similar results were found in an earlier review (Fleuren et al., 2020a). Even among studies from the same research... group (Hyland, 2020; Yéche et al., 2022), cohort definitions may vary substantially, precluding a meaningful comparison. Inconsistencies in imputation and feature extraction further complicate an objective evaluation of research progress. The increasing availability of open-access ICU datasets is a first, important step towards urgently needed model comparability (Sauer et al., 2022a). However, models derived from the same dataset may still vary considerably in their analytical setup. Earlier work has therefore created benchmarks that establish a single pipeline for preprocessing and modeling (Yéche et al., 2022; Harutyunyan et al., 2019). These benchmarks are hard-coded for a given dataset, following proprietary formats and supporting a limited, fixed set of tasks. Extending an existing benchmark to include new datasets or tasks requires changes to the benchmark’s — often lightly documented — source code. Despite the existence of multiple benchmarks, new models are therefore rarely evaluated on more than one dataset or do not use any benchmark (Shillan et al., 2019). We address this gap by providing Yet Another ICU Benchmark (YAIB) as a modular multi-dataset framework specifically designed for extensibility. Building on recent work to harmonize ICU data (Bennett et al., 2023) (i.e., match time-scale, clinical definitions, and units across datasets), we standardize the entire modeling workflow from the definition of clinical concepts (a medical abstraction to facilitate patient care) and data extraction to model fitting and evaluation across several established open-source ICU datasets (Sauer et al., 2022a). We provide a predefined set of common prediction tasks, developed in collaboration with clinical intensivists, that can be easily extended to fit user needs. Our benchmark, by default, provides endpoint prediction for ICU mortality, sepsis (Singer et al., 2016), acute kidney injury (AKI) (KDIGO, 2012), kidney function (KF), and length of stay (LoS). With this work, we aim to (1) dramatically reduce the overhead of developing new ICU prediction methods, (2) provide a transparent, open-source, and reproducible definition of experiments, and (3) unify ML workflows for ICU prediction modeling. 2 RELATED WORK Our work builds upon several previous efforts to harmonize the definition, development, and evaluation of ICU prediction models. YAIB combines these existing works in a novel, end-to-end fashion to enable quick, reproducible, and comparable model development. Publicly available ICU datasets Our benchmark currently supports four established ICU datasets (Sauer et al., 2022b): the Medical Information Mart for Intensive Care (MIMIC) version III (Johnson et al., 2016) and IV (Johnson et al., 2023), the eICU Collaborative Research Database (eICU) (Pollard et al., 2018), the High Time Resolution ICU Dataset (HiRID) (Hyland, 2020), and the AmsterdamUMCdb (AUMCdb) (Thoral et al., 2021). These datasets contain similar data items but differ in size and scope (Table 13). Together, they cover 334,812 ICU stays. We plan to integrate two recently released ICU datasets in the future (Rodemund et al., 2023; Jin et al., 2023). 
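To illustrate what harmonizing clinical concepts across these datasets involves in practice, the sketch below maps one concept (heart rate) from dataset-specific source tables onto a shared schema. All table names, item identifiers, and column names are hypothetical placeholders rather than the actual identifiers used by these databases or by YAIB's concept dictionaries:

```python
import pandas as pd

# Hypothetical concept entry: the table and item identifiers below are placeholders,
# not the real identifiers used by MIMIC, eICU, HiRID, or AUMCdb.
HEART_RATE = {
    "mimic":  {"table": "chartevents",   "item_id": "HR_ITEM",   "unit": "bpm"},
    "eicu":   {"table": "vitalperiodic", "item_id": "heartrate", "unit": "bpm"},
    "hirid":  {"table": "observations",  "item_id": "HR_VAR",    "unit": "bpm"},
    "aumcdb": {"table": "numericitems",  "item_id": "HR_ITEM",   "unit": "bpm"},
}

def extract_concept(raw: pd.DataFrame, source: str, concept: dict, name: str) -> pd.DataFrame:
    """Map one dataset's raw measurement rows onto a shared (stay_id, time, concept, value) schema."""
    spec = concept[source]
    rows = raw.loc[raw["item_id"] == spec["item_id"], ["stay_id", "time", "value"]].copy()
    rows["concept"] = name
    rows["unit"] = spec["unit"]     # unit conversion would be applied here when sources differ
    return rows

# Toy usage on fabricated rows shaped like a generic events table.
raw = pd.DataFrame({
    "stay_id": [1, 1, 2],
    "time": pd.to_datetime(["2021-01-01 00:10", "2021-01-01 01:00", "2021-01-02 03:30"]),
    "item_id": ["HR_ITEM", "HR_ITEM", "HR_ITEM"],
    "value": [85.0, 92.0, 110.0],
})
print(extract_concept(raw, "mimic", HEART_RATE, "heart_rate"))
```

Once every dataset exposes the same (stay_id, time, concept, value) layout, cohort definitions, preprocessing steps, and models can be reused across datasets without modification.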
Benchmarks To improve comparability between models trained on these ICU datasets, several benchmarks or benchmark-like applications have been developed (Table 1). These solutions mainly differ in the tasks and models they support. Notably, existing benchmarks heavily focus on benchmarking results, often hardcoding key steps like data extraction, task definition, preprocessing, feature generation, and sometimes model training. While they may reduce implementation overhead when evaluating new ML approaches, present benchmarks are difficult to adapt to user requirements. Core code base changes are often necessary if the users' problems do not fit into the provided task definitions. Even advanced modeling frameworks such as Jarrett et al. (2021) and Saveliev & van der Schaar (2023) share this weakness, as they do not support reproducible data extraction or task definitions; thus, they do not provide an end-to-end solution like YAIB.

Multi-dataset support Due to considerable heterogeneity in data structure, existing benchmarks tend to focus on a single dataset, most frequently MIMIC-III. As MIMIC-III also has a large existing user base (Syed et al., 2021), it thus often becomes the default choice (Shillan et al., 2019). This has potentially resulted in a self-enforcing bias towards the MIMIC-III datasets, which represent a single-center US population. Even frameworks that work with its successor MIMIC-IV lack backward compatibility (Mandyam et al., 2021; Gupta et al., 2022). Among the few multi-dataset solutions, Tang et al. (2020) operates on both eICU and MIMIC-III, but lacks many of the model architectures found in other works and no longer appears to be in active development. Oliver et al. (2023) provides a hardcoded pipeline to combine several datasets without providing cohort definitions, benchmarking, or an end-to-end pipeline. Finally, Yang et al. (2023) recently proposed PyHealth as a comprehensive deep learning toolkit for both ML researchers and healthcare practitioners; it is perhaps most closely related to our work. Unfortunately, PyHealth only supports subsets of the full datasets, and tasks must be defined anew for each dataset. It also does not currently include time series or ways to deal with missing data, limiting its use for novel clinical or ML developments.

Table 1: Comparison of existing benchmarks and YAIB on ICU data, ordered by publication date. The table compares Johnson et al., Purushotham et al., Harutyunyan et al., Barbieri et al., Wang et al., Jarrett et al., Sheikhalishahi et al., Tang et al., Yèche et al., Mandyam et al., Gupta et al., Yang et al., Saveliev et al., Oliver et al., and YAIB (ours) with respect to supported datasets (MIMIC-III, MIMIC-IV, eICU, HiRID, AUMCdb), prediction tasks (mortality risk, circulatory failure, kidney function (KF), respiratory failure, sepsis, acute kidney injury, phenotyping, interventions, length of stay (LoS), readmission), preprocessing, model architectures, code availability, and dataset interoperability. *: These tasks are not included by default but may be easily added through our cohort definition pipeline. §: Due to lack of recorded database information, these tasks can only be defined for MIMIC III and IV. †: Interface and extensive instructions to add interoperable modules following a provided abstraction (datasets, prediction tasks, models) and adjust existing modules without extensive rewriting or refactoring. ‡: Provides an uncoupled, interoperable dataset definition, allowing, among others, transfer learning and domain adaptation.

3 Benchmark Design

YAIB addresses the issues identified above and provides a unified interface to develop clinical prediction models for the ICU. An experiment in YAIB consists of four steps: 1) define clinical concepts from the raw data; 2) extract the patient cohort and specify the prediction task; 3) preprocess the data and generate features; and 4) train and evaluate the ML models (Figure 1).

3.1 Design Philosophy

We strongly believe that medical research is inherently complex and that, rather than providing a rigid benchmark, the most value lies in providing a modular setup where the user can exchange any part with something that better suits their needs and, importantly, do so reproducibly. For example, users frequently want to highlight a particular aspect of their model, prompting them to adapt the default tasks. Changes, however minor, can render results incomparable. We, therefore, prioritized extensibility across the entire experiment lifecycle.
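To make the four-step experiment structure of Section 3 concrete, the following is a minimal, self-contained sketch of the same flow on a tiny synthetic table, standing in for concept extraction, cohort and label definition, feature generation, and cross-validated training. The column names, the synthetic data, and the modeling choices are purely illustrative and do not correspond to YAIB's actual modules or API.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 1) "Clinical concepts": a toy long-format table of ID-time-value rows,
#    standing in for harmonized concepts such as heart rate and lactate.
concepts = pd.DataFrame({
    "stay_id": np.repeat(np.arange(200), 24),
    "hour": np.tile(np.arange(24), 200),
    "hr": rng.normal(85, 15, 200 * 24),
    "lactate": rng.gamma(2.0, 1.0, 200 * 24),
})

# 2) Cohort and task: one row per stay with a synthetic mortality label, using
#    only data from the first 24 hours (labels are random here, so the
#    resulting AUROC is only illustrative).
cohort = pd.DataFrame({"stay_id": np.arange(200),
                       "died": rng.binomial(1, 0.2, 200)})

# 3) Preprocessing / feature generation: per-stay aggregates of the dynamic data.
features = concepts.groupby("stay_id")[["hr", "lactate"]].agg(["mean", "min", "max"])
features.columns = ["_".join(col) for col in features.columns]

# 4) Train and evaluate with cross-validation.
X = features.loc[cohort["stay_id"]].to_numpy()
y = cohort["died"].to_numpy()
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc"))
```

In YAIB, each of these four steps is a separately exchangeable module; this modularity is precisely the extensibility discussed in this subsection.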
This high level of extensibility may increase the complexity of our benchmark. We mitigate this by providing a range of default experiments for users with limited access to medical expertise or who are content with a fixed set of medical tasks. The experiments were designed to be directly comparable and provide a common benchmark. This allows for a standardized evaluation of models similar to existing benchmarks but still benefits from out-of-the-box support for multiple datasets and easy adaptability if need be. While we did our best to ensure extensibility, YAIB cannot currently support all possible use cases. Specialized use cases like federated learning or reinforcement learning currently require custom code. However, we keep adding functionality to YAIB, and users may nevertheless benefit from using parts of our framework. We provide detailed documentation on how to implement any extensions (Appendix F). We strongly request users of YAIB to provide their code and a detailed list of the changes they have made to the repository to accurately and transparently provide results for their experiments. 3.2 Clinical Concepts We ensured that our benchmark supports existing and future ICU datasets. Working with multiple datasets requires careful data harmonization, as datasets are collected in different locations, with different clinical recording, and may have completely different data structures. We use the ricu (Bennett et al., 2023) R package to bring datasets into a common, semantically interoperable format. This harmonization relies on two things: 1) a common temporal reference point and 2) a dataset-independent definition of clinical concepts. ricu by default distinguishes measurements recorded for a patient, a hospital admission, or an ICU admission, and supports conversion between these levels of measurement. Through definition of reference points, it facilitates temporal comparability between datasets. ricu also allows defining clinical concepts such as heart rate or SOFA score independently of any particular dataset, specifying their meaning, plausible min/max ranges, and units of measurement. A concept can be enabled for a dataset by specifying how it should be extracted from the data, for example, by selecting an entire column or subsetting a table based on an item identifier. ricu thus acts as an interface to the raw data (stored in a fast, compressed column format), on command returning the data for a concept in a table of ID-time-value pairs. This is still no panacea to make ICU datasets immediately interoperable, but it provides a helpful framework for harmonization. For users unfamiliar with R, we provide an interface to access ricu concepts directly from Python. PyICU, a native Python implementation of ricu, is in development. 3.3 Patient Cohort and Task Definition Once in a common format, the same task definition can be applied across datasets. This facilitates code reuse and eliminates opportunities for error. Even so, care must be taken to combine clinical concepts, define meaningful prediction targets, and apply appropriate exclusion criteria. We provide default workflows and helper functions to support this process, including a transparent pipeline for applying exclusion criteria and reporting patient attrition. We supplied this functionality in a Table 2: Prediction task overview. Note that the related work is non-exhaustive. 
| No | Task | Frequency | Type | Related work | |----|---------------|-----------------|------|-------------------------------------------------------------------------------| | 1 | Mortality | Once per stay* | C | Baker et al. [2020], Lu et al. [2021], Medic et al. [2019], Sharma et al. [2017], Syed et al. [2021] | | 2 | AKI | Hourly | C | Huang et al. [2021], Nikkinen et al. [2022], Pan et al. [2019], Rank et al. [2020], Shamout et al. [2021] | | 3 | Sepsis | Hourly | C | Kok et al. [2020], Laurisen et al. [2020], Merath et al. [2020], Fleuren et al. [2020b], Moor et al. [2021a] | | 4 | KF | Once per stay* | R | Muruthiran et al. [2020], Reyna et al. [2019], Shamour et al. [2021], Wang et al. [2021] | | 5 | LoS | Hourly | R | Tomašev et al. [2019], Futoma et al. [2016], Perotte et al. [2015], Cheng et al. [2018], Guo et al. [2020] | C: Classification, R: Regression. * Using data from 0-24 hours. standalone repository facilitate its use with other modeling frameworks such as Clairvoyance (Jarrett et al. [2021]). The specification of our adaptive and re-definable pipeline is found in Appendix D. 3.4 Preprocessing and Feature Extraction Further preprocessing is often required at runtime, including data normalization, generation of missingness indicators, and imputation. We provide a transparent, flexible way for users to define their preprocessing pipeline (also available as a standalone package), including default implementations of historical aggregation (e.g., mean or variance), resampling of the time resolution, imputation methods, and a wrapper for any Scikit-learn (Pedregosa et al. [2011]) preprocessing step. Custom steps can be added by subtyping an abstracted step interface or providing a callable object to a generic step. 3.5 Training and Evaluation A single YAIB experiment creates and optimizes a model for a given task and preprocessing pipeline. Experiments are defined using the gin-config library (Dan Holtmann-Rice et al. [2018]) in simple Python-like text files. The model configuration defines the model architecture and contains information on hyperparameters and optimizers. Every aspect of a model is fully configurable. The task configuration defines the target, the data source, the features, and the preprocessing. Additionally, one can define the cross-validation splits and the number of iterations. By defining the model and task separately, they can be mixed and matched, training the same architecture for multiple tasks or training multiple models for a single task. We provide details for adding new datasets, preprocessing, models, and an example of sepsis prediction in Appendix E. Training is supervised by PyTorch Lightning (Falcon & team [2023]), which uses standardized training and logging, GPU parallelism, and advanced debugging. Users can configure hyperparameter ranges and sampling methods for model optimization. A Gaussian Process is fit to the hyperparameters using scikit-optimize (Head et al. [2021]) as a robust alternative to random search ( Snoek et al. [2012]). Result tracking Results are automatically aggregated and written to a JSON file, in addition to optional Tensorboard (Abadi et al. [2016]), PyTorch Lighting (Falcon & team [2023]), and WandB (Biewald [2020]) logging for easy experiment tracking. Performance evaluation records widely-used metrics out of the box (AUROC, AUPRC, calibration curve, accuracy, loss) and supports multiple evaluation libraries: TorchMetrics (Nicki Skafte Detlefsen et al. [2022]), Pytorch-Ignite (Fomin et al. 
[2020]), and Scikit-Learn (Pedregosa et al. [2011]) metrics. New metrics, either developed by the user or from existing libraries, can be easily added (see Appendix F.6). 4 Experiments We ran experiments for five common prediction tasks: ICU mortality, onset of acute kidney injury (AKI), onset of sepsis, kidney function (KF) on day 2, and remaining length of stay (LoS) (Table 2). Mortality and KF used data from 0-24 hours. All other task used all available data until the event or discharge. We ensured adequate data quality by excluding: 1) patients younger than 18 years; 2) stays with missing discharge times; 3) stays with less than six hours in the ICU; 4) stays with measurements in less than four time bins; and 5) stays with no measurement for more than 12 consecutive hours in the ICU. We also applied task-specific exclusion criteria. For example, we excluded stays of less than 30 hours for the ICU mortality task, as this could introduce causal leakage from patients already dead or about to die at the time of prediction. For each task, we included 52 features, of which 4 were static and 48 were time series. Various additional features, including prescriptions and diagnoses, can be directly used in YAIB by adjusting the cohort generation module (YAIB-cohorts); if features are not available, their implementation is straightforward (Appendix F). Information on the datasets, features, and individual cohort definitions can be found in Appendix C and D. The code to define these cohorts... is publicly available. In addition to the baseline performance for each task, dataset, and model, we used YaIB to investigate the effects of small variations in task definitions on predictive performance — a common obstacle to model comparability (Moor et al., 2021b; Fleuren et al., 2020b). Specifically, we i) only excluded stays of less than 24 hours to assess the effects of causal leakage by aligning our mortality task with (Yèche et al., 2022), ii) omitted static and dynamic historical features (i.e., min, max, count, mean) to simulate access to fewer input data, and iii) compared alternative definitions for sepsis. We, additionally, evaluated transfer learning with the harmonized datasets (iv). Preprocessing 1. Scaling: The data was scaled to zero mean and unit variance. 2. Imputation: After adding missing indicators, we forward-filled all columns for the dynamic data, replacing missing values with the last known values of the same stay. Missing values without a prior measurement were filled with the sample mean. To prevent data leakage, we used the mean of the train split as the sample mean for all splits. 3. Feature generation: We generated the min, max, mean, and count of measurements for each feature in the dynamic data. We only applied this step for the conventional ML models, e.g., Light Gradient Boosting Machine (LGBM), as they cannot capture sequential information natively. 4.1 Models and experimental setup We considered a range of algorithms used in previous benchmarks (Table 1) and applied work (Hyland, 2020; Pirracchio et al., 2015; Silva et al., 2012; Syed et al., 2021), including regularized logistic regression (LR) and elastic net (EN) (used for classification and regression, respectively (Pedregosa et al., 2011)), LGBM (Ke et al., 2017), and four variations of neural networks: Gated Recurrent Unit (GRU) (Cho et al., 2014), Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), Temporal Convolutional Network (TCN) (Bai et al., 2018) and transformer (TF) (Vaswani et al., 2017). 
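The three preprocessing steps listed above translate directly into a few dataframe operations. The sketch below is a simplified illustration with an assumed ID-time-value column layout; it is not the ReciPys implementation, but it follows the same recipe: train-split statistics for scaling and the mean fallback (to avoid leakage), missingness indicators plus per-stay forward-filling, and min/max/mean/count aggregation for the non-sequential models.

```python
import pandas as pd

def preprocess(train: pd.DataFrame, test: pd.DataFrame, value_cols, group_col="stay_id"):
    """Simplified sketch of the preprocessing steps described above (not ReciPys)."""
    train, test = train.copy(), test.copy()

    # 1) Scaling statistics taken from the train split only, to prevent leakage.
    mean = train[value_cols].mean()
    std = train[value_cols].std().replace(0, 1.0)

    for df in (train, test):
        # 2) Imputation: missingness indicators, forward-fill within each stay,
        #    then fall back to the train-split mean for never-observed values.
        for c in value_cols:
            df[c + "_missing"] = df[c].isna().astype(int)
        df[value_cols] = df.groupby(group_col)[value_cols].ffill().fillna(mean)
        df[value_cols] = (df[value_cols] - mean) / std

    # 3) Feature generation for non-sequential models: min, max, mean, and the
    #    count of actually observed measurements per stay.
    def aggregate(df):
        stats = df.groupby(group_col)[value_cols].agg(["min", "max", "mean"])
        stats.columns = ["_".join(c) for c in stats.columns]
        observed = 1 - df[[c + "_missing" for c in value_cols]]
        counts = observed.groupby(df[group_col]).sum()
        counts.columns = [c.replace("_missing", "_count") for c in counts.columns]
        return stats.join(counts)

    return aggregate(train), aggregate(test)
```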
LR, EN, and LGBM were used with the feature generation described above, as they are unable to utilize time series. The implementation of neural networks was adapted from (Yèche et al., 2022). For our experiments, unless stated otherwise, we used 5 iterations of 5-fold cross-validation. Hyperparameters were tuned on the training set using 30/50 (DL/ML, respectively) iterations of Bayesian hyperparameter optimization (Snoek et al., 2012). For computational reasons, hyperparameter tuning used only the first 2/3 folds, respectively (see Appendix H for a definition of all searched and selected hyperparameters). The final validation of the best hyperparameters used all 5 folds. Each model was optimized for a maximum of 1000 epochs. Training was stopped early if performance on the validation set did not improve for 10 epochs. The epoch with the best performance on the validation set was retained and evaluated on the test set. This process was repeated for 5 iterations, after which the results were averaged, and the standard deviation was calculated. 4.2 Benchmarking baseline models on major ICU datasets Baseline results for all tasks can be found in Table 3 and 4. Note that we have also benchmarked our tasks for two openly available demo datasets from MIMIC-III and eICU; these can be directly accessed without completing a credentialing procedure (see Table 11 and 12). ICU mortality The performance of traditional ML and DL models was highly comparable among each other and across datasets when predicting mortality based on data from the first 24 hours. Notably, AUPRC was higher in AUMDb due to a higher outcome prevalence (Table 13). Acute kidney injury (AKI) Maximum achievable performance was also similar across datasets when predicting the hourly onset of AKI, with the notable exception of HiRID, which had both lower AUROC and AUPRC for all models. GRU models consistently achieved the best performance. Sepsis The performance of baseline models was worst for the hourly onset of sepsis, both for AUROC and especially AUPRC. This may be explained by the particularly low prevalence of ~1% hourly bins classified as septic and the relative difficulty of predicting sepsis in general (Moor et al., 2021b). Kidney function (KF) Classical ML models achieved relatively good performance for this task, which may reflect the dependence of KF on a limited number of features (Grinsztajn et al., 2022). Remaining length of stay (LoS) The performance of ML and DL models was also comparable across datasets. Nevertheless, predicting the length of stay seems difficult, given that the average MAE is almost two days. Transformers consistently outperformed most other model types. Table 3: Baseline performance on the classification tasks. We embolden the best mean AUROC × 100 (↑, i.e., higher is better) and AUPRC × 100 (†) per dataset and those within a standard deviation (±). 
| Algorithm | AUMCdb | HiRID | eICU | MIMIC-IV | |-----------|--------|-------|------|----------| | | AUROC | AUPRC | AUROC | AUPRC | | Mortality | | | | | | LR | 83.7±0.6 | 52.9±1.2 | 84.0±0.3 | 36.9±1.1 | 84.8±0.2 | 33.0±0.7 | 86.1±0.1 | 39.7±0.6 | | LGBM | **84.5±0.5** | 53.7±1.2 | 84.4±0.3 | 40.6±0.8 | **85.7±0.2** | 36.0±0.6 | **87.7±0.2** | 44.2±0.7 | | GRU | 83.9±0.3 | 53.8±0.7 | **84.8±0.2** | 39.4±0.4 | **86.0±0.1** | 35.6±0.1 | **87.6±0.1** | 42.8±0.3 | | LSTM | 83.7±0.7 | 53.6±1.4 | 84.0±0.7 | 37.8±1.0 | 85.5±0.2 | 35.7±0.8 | 86.7±0.4 | 41.0±0.7 | | TCN | **84.0±0.6** | 54.2±1.4 | **84.6±0.7** | 39.2±1.3 | 85.4±0.2 | 34.3±0.6 | 87.1±0.3 | 41.4±0.8 | | TF | 84.1±0.2 | 54.4±1.1 | **84.9±0.7** | 39.3±1.5 | **85.9±0.2** | 34.7±0.8 | 86.9±0.3 | 42.2±0.3 | | AKI | | | | | | LR | 85.5±0.3 | 45.1±0.4 | 79.6±0.1 | 31.8±0.8 | 72.8±0.1 | 32.2±0.2 | 77.1±0.2 | 37.7±0.3 | | LGBM | 85.8±0.3 | 48.4±0.6 | 80.2±0.2 | 32.8±0.4 | 84.6±0.1 | 50.8±0.2 | 83.8±0.1 | 53.3±0.2 | | GRU | **90.6±0.3** | 52.8±0.7 | **82.2±0.2** | 33.9±0.4 | **90.9±0.0** | 72.2±0.1 | **90.7±0.1** | 69.6±0.2 | | LSTM | 86.5±0.4 | 40.6±0.6 | 81.0±0.4 | 31.8±0.4 | 90.2±0.1 | 69.9±0.2 | 89.7±0.1 | 66.5±0.2 | | TCN | 89.6±0.2 | 50.0±0.9 | 81.2±0.2 | 32.3±0.4 | 90.4±0.0 | 70.4±0.2 | 89.8±0.1 | 66.8±0.2 | | TF | 88.2±0.2 | 48.2±0.7 | 81.5±0.2 | 33.4±0.5 | 89.9±0.1 | 68.0±0.3 | 89.6±0.1 | 65.6±0.2 | | Sepsis | | | | | | LR | 74.7±1.0 | 4.0±0.4 | 76.5±0.6 | 8.4±0.3 | 71.8±0.3 | 2.9±0.1 | 77.1±0.4 | 4.6±0.1 | | LGBM | 74.0±0.8 | 5.2±0.7 | 76.1±0.4 | 10.4±0.5 | 69.1±0.3 | 3.3±0.1 | 77.5±0.3 | 5.9±0.2 | | GRU | 79.7±0.9 | 7.7±0.7 | **80.6±0.5** | 12.6±0.5 | **77.4±0.2** | 5.1±0.1 | **83.6±0.3** | 9.1±0.3 | | LSTM | 77.1±0.8 | 6.4±0.5 | 78.8±0.4 | 11.1±0.5 | 74.0±0.2 | 4.0±0.1 | 82.0±0.3 | 8.0±0.2 | | TCN | 78.7±0.7 | 7.1±0.6 | **80.8±0.5** | 13.0±0.4 | 76.7±0.1 | 4.9±0.1 | 82.7±0.3 | 8.8±0.2 | | TF | **80.7±0.9** | 8.6±0.8 | **80.8±0.3** | 12.6±0.6 | 76.2±0.1 | 4.6±0.1 | 80.0±0.8 | 6.6±0.2 | Table 4: Baseline performance on the regression tasks. Results are reported in Mean Absolute Error (↓) | Algo. | Kidney function in mg/dL | Length of Stay in hours | |-------|--------------------------|-------------------------| | | AUMCdb | HiRID | eICU | MIMIC-IV | AUMCdb | HiRID | eICU | MIMIC-IV | | EN | **0.24±0.00** | 0.28±0.00 | 0.31±0.00 | 0.25±0.00 | 54.9±0.0 | 47.2±0.1 | 43.6±0.0 | 46.5±0.0 | | LGBM | 0.32±0.00 | 0.34±0.00 | **0.29±0.00** | **0.24±0.00** | 44.7±0.0 | **39.2±0.1** | 39.3±0.0 | 40.1±0.0 | | GRU | 0.29±0.00 | 0.32±0.01 | 0.34±0.01 | 0.30±0.01 | 42.9±0.1 | 39.6±0.1 | 38.9±0.1 | 39.9±0.1 | | LSTM | 0.29±0.00 | 0.33±0.00 | **0.28±0.01** | 0.28±0.01 | 44.8±0.1 | 39.8±0.1 | 39.2±0.1 | 40.6±0.1 | | TCN | 0.28±0.01 | **0.23±0.01** | 0.31±0.00 | 0.28±0.01 | 43.7±0.1 | 39.9±0.1 | 38.9±0.0 | 40.4±0.1 | | TF | 0.26±0.00 | 0.31±0.01 | 0.33±0.01 | 0.32±0.01 | **41.8±0.1** | **39.1±0.1** | **38.2±0.1** | **39.0±0.1** | We provide the average and Interquartile range for Kidney Function and Length of Stay in Table 4. 4.3 Using YAIB as an experimental ML framework Changing exclusion criteria for mortality cohorts As hypothesized, the choice of exclusion criteria could majorly impact achievable prediction performance (Table 5). Compared to the peak performance achieved with the HiRID-benchmark (Yeche et al., 2022), our baseline performance for the mortality task was noticeably lower. Aligning the exclusion criteria accounted for half of the performance difference. 
The remaining difference was likely due to the inclusion of additional predictors — most notably drug usage — in the HiRID-benchmark. This highlights the difficulties of comparing works that ostensibly address the same task, even using the same dataset and model implementation. Restricting input features We observed that dynamic feature generation consistently outperformed task definitions that did not include them (Table 7 and 8). LR on MIMIC-IV showed a considerable performance gap, whereas AUMCdb remained stable. We noted a performance decrease that ranges between 4.0% and 19.1% for LR and between 5.2% and 13.1% for LGBM. Omitting static features led to minor drops in performance (Table 9 and 10); averaged across datasets, we observe a performance differences ranging between 0.5% and 0.2% for the transformer model. Comparing sepsis definitions Label definitions also had a considerable impact on AUROC and/or AUPRC (Table 6), which was not always apparent from the definition alone. Sepsis has been defined in several ways (Fleuren et al., 2020b), mainly because a clinical gold standard that can be transferred... Table 5: ICU mortality prediction on HiRID with (>24h) and without (>30h) possibility of causal leakage. | Algorithm | w/o leakage | w/ leakage | Yèche et al. (2022) | |-----------|-------------|------------|---------------------| | | AUROC | AUPRC | AUROC | AUPRC | AUROC | AUPRC | | LR | 84.0±0.3 | 36.9±1.1 | 87.2±0.4 | 43.1±1.3 | 89.0±0.0 | 58.1±0.0 | | LGBM | 84.5±0.3 | 40.6±0.9 | 87.9±0.5 | 47.7±1.2 | 88.8±0.2 | 54.6±0.8 | | GRU | 84.8±0.2 | 39.4±0.4 | 88.2±0.3 | 46.1±1.2 | 90.0±0.4 | 60.3±1.6 | | TCN | 84.6±0.7 | 39.2±1.3 | 87.8±0.2 | 45.2±1.0 | 89.7±0.4 | 60.2±1.1 | | TF | 84.9±0.7 | 39.4±1.5 | 88.2±0.3 | 47.1±1.2 | 90.8±0.2 | 61.0±0.8 | Table 6: Sepsis prediction on MIMIC-IV for different definitions of sepsis. | Sepsis definition | Seymour et al. (2016)* | Moor et al. (2021a) | Calvert et al. (2016) | |-------------------|------------------------|---------------------|-----------------------| | | AUROC | AUPRC | AUROC | AUPRC | AUROC | AUPRC | | LGBM | 75.9±0.2 | 4.3±0.0 | 72.4±0.0 | 10.5±0.0 | 62.2±0.2 | 1.8±0.0 | | GRU | 79.2±0.1 | 6.1±0.0 | 80.9±0.0 | 17.7±0.0 | 89.2±0.0 | 9.3±0.2 | * Our definition; adapted to be more clinically actionable, see Appendix D to ML models is currently lacking. Our sepsis definition (adapted from Seymour et al. (2016), see Appendix D) can be considered closely related to that used by Moor et al. (2021a), who implement a variant of Sepsis-3 (Singer et al., 2016). However, we required that antibiotics were administered continuously for ≥ 3 days (Reyna et al., 2019). We judged that this would increase the clinical usability of the task but found that it also severely reduced the achievable AUPRC — likely due to a much lower prevalence (Table 1). The definition used by Calvert et al. (2016) on the other hand adapted Sepsis-2 (Levy et al., 2003), which differs fundamentally from Sepsis-3 and resulted in a notably higher AUROC (Engoren et al., 2020). This highlights the importance of precise cohort definitions, as some definitions may, by design, be more difficult to predict. 4.4 Transfer Learning External validation YAIB’s common dataset format allowed us to evaluate a model trained on an equal sample of one dataset on data from all other datasets. We additionally trained a model on pooled (d-1) data from three datasets and evaluated on the fourth, held-out dataset. 
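Because every dataset is first brought into the same harmonized feature space, the external-validation protocol above reduces to a double loop over source and target datasets plus one pooled leave-one-dataset-out model per target. A minimal sketch, assuming pre-extracted feature matrices and substituting a generic scikit-learn classifier for the GRU evaluated in Figure 2:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def transfer_matrix(datasets, make_model=lambda: GradientBoostingClassifier()):
    """datasets: dict name -> (X_train, y_train, X_test, y_test) in a shared feature space."""
    names = list(datasets)
    results = {}

    # Train on each single dataset and evaluate on the held-out split of every dataset.
    for src in names:
        X_src, y_src, _, _ = datasets[src]
        model = make_model().fit(X_src, y_src)
        for tgt in names:
            _, _, X_tgt, y_tgt = datasets[tgt]
            results[(src, tgt)] = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])

    # Pooled (d-1): train on all datasets except the evaluation dataset.
    for tgt in names:
        X_pool = np.vstack([datasets[s][0] for s in names if s != tgt])
        y_pool = np.concatenate([datasets[s][1] for s in names if s != tgt])
        model = make_model().fit(X_pool, y_pool)
        _, _, X_tgt, y_tgt = datasets[tgt]
        results[("pooled(d-1)", tgt)] = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])
    return results
```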
For the ICU mortality task (Figure 2), models, as expected, performed best on independent test data from their training dataset (diagonal). Performance could drop considerably when models were evaluated in another database (off-diagonal). Notably, AUPRC performance could increase in the evaluation dataset (rows) but always remained lower than the highest achievable performance for that dataset (columns). We found that MIMIC-IV and eICU transferred well among each other. The pooled model usually performed as well as the best single-dataset model. Notably, AUMCdb AUPRC results demonstrate decidedly Figure 2: Performance of prediction models when trained on one dataset (row) and evaluated on all others (columns). Left: Performance in AUROC of the GRU model on ICU mortality. Right: Performance in AUPRC for the same models. Pooled (d-1) refers to training a model on every dataset except the evaluation dataset. higher performance than evaluation on other datasets, which could be the result of a patient case mix and outcome prevalence (see Table 14). **Fine-tuning** In Figure 2, we saw that eICU resulted in the most generalizable model for ICU mortality, which may serve as a strong pre-training for transfer learning. Since it worked worst for HiRID, we further fine-tuned the eICU GRU model (source) for HiRID (target) by retraining it using an increasing number of samples from the HiRID dataset. We compared the results to a model trained from scratch on the same amount of HiRID samples (Figure 3). Fine-tuning was profitable for any number of additional samples and especially for <4,000 samples. ### 5 DISCUSSION We provide extensive ML and DL baselines for five clinical prediction tasks trained across four major open-source ICU datasets. While we frequently obtained comparable results across model architectures, seemingly small differences in cohort definition could substantially impact the achieved accuracy. Our findings highlight not only the need for standardized training pipelines but also for harmonized cohort definitions to allow for a meaningful comparison of clinical prediction models. Our work provides the first international, multi-center ICU benchmark, including the first-ever benchmark for the AmsterdamUMCdb dataset. It naturally facilitates sorely needed external validation of model performances and allows fine-tuning of pre-trained models for new datasets. This makes YAIB relevant to a wide range of research areas beyond classical supervised learning, including domain adaption and generalization. We hope this broad reach encourages ICU data providers to ensure compatibility with YAIB, as they can expect a larger overall research impact. This simplifies the use of novel datasets by the clinical and ML community. YAIB aids researchers in training baseline models by providing them with ready-to-use implementations of state-of-the-art model architectures; new model implementations can therefore be easily compared. While most existing benchmarking studies are hard-coded, we utilize flexible, dataset-independent cohort definitions and configurable preprocessing facilities linked via a common, shareable syntax. This setup acknowledges that task definitions inevitably involve arbitrary decisions, without one “size” that fits all. 
In our work, we embrace this idea and aim to equip researchers — both applied and theoretical — with the tools to quickly adapt a task to their individual needs (including the use of custom proprietary data) while maintaining reproducibility and reusability across studies. Models can thus be compared across multiple, slightly different task definitions and datasets, still ensuring an apples-to-apples comparison. We hope this lowers the bar for researchers to test their approaches across a range of configurations and datasets. YAIB is currently limited to ICU settings, where several datasets are publicly available. A similar setup could be beneficial for data from other medical settings, such as inpatient wards. Although created for critical care, YAIB is not specific to the ICU and can be readily extended to other settings, provided a suitable configuration is defined. Features included in YAIB, at the time of writing, mainly relate to vital signs, lab tests, and data relevant to outcome definitions. Further clinician-assisted harmonization efforts will be necessary to increase the breadth of features, most notably medications and comorbidities. If YAIB is adapted to general EHR, including clinical notes and medical imaging is a logical next step. We also note that we compared these models on the basis of commonly used ML metrics; we leave the comparison with respect to clinical fairness and bias as an easy future extension to our framework (see Appendix F). Finally, we advise users of our benchmark to carefully consider the compromises made to allow for cohort harmonization; we strongly recommend clinical validation before making practical decisions based on the developed models. ### 6 CONCLUSION Routine medical data is highly complex. Without clear ground truth, researchers are inevitably forced to make arbitrary design choices when defining outcomes and populations of interest. To promote comparable and reproducible models in this setting, we believe that further tools are needed that allow researchers to define clinical prediction tasks transparently, share experimental setups easily, and validate results against various data sources. As a flexible and extensible framework for clinical modeling on ICU data, YAIB is meant to be a step towards that goal. 7 ACKNOWLEDGEMENTS Robin van de Water is funded by the “Gemeinsamer Bundesausschuss (G-BA) Innovationsausschuss” in the framework of “CASSANDRA - Clinical ASSist AND aleRt Algorithms” (project number 01VSF20015). We would like to acknowledge the work of Alisher Turubayev, Anna Shopova, Fabian Lange, Mahmut Kamalak, Paul Mattes, and Victoria Ayvasky for adding Pytorch Lightning, Weights and Biases compatibility, and several optional imputation methods to a later version of the benchmark repository. 8 ETHICS STATEMENT We do not manage access and do not provide access to any of the full medical datasets included in this work, and we adhere to the usage licenses for each dataset. Users can follow the credentialing procedures outlined in Appendix C. However, we provide two preprocessed demo datasets out of the box for reproducibility and experimentation. The demo task cohorts for MIMIC-III and eICU mentioned in that section are derived from the official demo datasets published on PhysioNet by the original authors of the respective databases. Each demo dataset represents a small, curated subset of data that is freely accessible without any need for human subject training. 
Both demo datasets are published under an Open Data Commons Open Database License v1.0, which explicitly permits the adoption and sharing of the data. The original demo data, as well as further information, can be found at the MIMIC-III demo and eICU demo Physionet pages. 9 REPRODUCIBILITY STATEMENT We include the source code of YAIB (main benchmark), YAIB-cohorts (adaptable cohort extraction) and ReciPys (extensible preprocessing package) in our submission. Models for each task and architecture are publicly available. In the included source code, a file called PAPER.md describes the reproducibility steps of the experiments in this paper. Specifically, one requires the standalone codebase of YAIB-cohorts to first create the cohorts from the acquired data, once you have completed the required credentialing (see Appendix C for details). As mentioned, we include demo cohort data for each task (results for these cohorts are shown in Appendix B). Appendix D describes the data processing and task creation. The usage of YAIB is detailed in Appendix E. Appendix F shows how YAIB can be extended with new datasets, clinical concepts, tasks, models, and evaluation metrics. Additionally, we refer to the README.md and the wiki for the usage of YAIB. Appendix G and H detail the experiment design and chosen hyperparameters, respectively. Finally, Appendix I contains the machine learning reproducibility checklist for our work. https://github.com/rvandewater/YAIB https://github.com/rvandewater/YAIB-cohorts https://github.com/rvandewater/ReciPys https://github.com/rvandewater/YAIB-models https://github.com/rvandewater/YAIB/blob/master/PAPER.md https://github.com/rvandewater/YAIB/blob/master/README.md https://github.com/rvandewater/YAIB/wiki/YAIB-wiki-home
j8hdRqOUhN
The primary incentive for training the model in a lower-dimensional latent space appears to be a reduction in computational complexity. Nevertheless, the experimental outcomes presented in Tables 1 and 2 demonstrate that the proposed algorithm surpasses conventional techniques in terms of reconstruction quality. Could you provide insights into the potential factors contributing to this outcome?
Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency Bowen Song\textsuperscript{1*}, Soo Min Kwon\textsuperscript{1*}, Zecheng Zhang\textsuperscript{2}, Xinyu Hu\textsuperscript{3}, Qing Qu\textsuperscript{1} Liyue Shen\textsuperscript{1} \textsuperscript{1}University of Michigan, \textsuperscript{2}Kumo.AI, \textsuperscript{3}Microsoft Abstract Latent diffusion models have been demonstrated to generate high-quality images, while offering efficiency in model training compared to diffusion models operating in the pixel space. However, incorporating latent diffusion models to solve inverse problems remains a challenging problem due to the nonlinearity of the encoder and decoder. To address these issues, we propose Resample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models. Our algorithm incorporates data consistency by solving an optimization problem during the reverse sampling process, a concept that we term as hard data consistency. Upon solving this optimization problem, we propose a novel resampling scheme to map the measurement-consistent sample back onto the noisy data manifold and theoretically demonstrate its benefits. Lastly, we apply our algorithm to solve a wide range of linear and nonlinear inverse problems in both natural and medical images, demonstrating that our approach outperforms existing state-of-the-art approaches, including those based on pixel-space diffusion models. 1 Introduction Inverse problems arise from a wide range of applications across many domains, including computational imaging (Beck & Teboulle [2009], Afonso et al. [2011]), medical imaging (Suetens [2017], Ravishankar et al. [2019]), and remote sensing (Liu et al. [2021, 2022]), to name a few. When solving these inverse problems, the goal is to reconstruct an unknown signal $x_* \in \mathbb{R}^n$ given observed measurements $y \in \mathbb{R}^m$ of the form $$y = A(x_*) + \eta,$$ where $A(\cdot) : \mathbb{R}^n \rightarrow \mathbb{R}^m$ denotes some forward measurement operator (can be linear or nonlinear) and $\eta \in \mathbb{R}^m$ is additive noise. Usually, we are interested in the case when $m < n$, which follows many real-world scenarios. When $m < n$, the problem is ill-posed and some kind of regularizer (or prior) is necessary to obtain a meaningful solution. In the literature, the traditional approach of using hand-crafted priors (e.g., sparsity) is slowly being replaced by rich, learned priors such as deep generative models. Recently, there has been a lot of interests in using diffusion models as structural priors due to their state-of-the-art performance in image generation (Dhariwal & Nichol [2021a], Karras et al. [2022], Song et al. [2023b], Lou & Ermon [2023a]). Compared to generative adversarial networks (GANs), diffusion models are generally easier and more stable to train, making them a generative prior that is more readily accessible (Dhariwal & Nichol [2021b]). The most common approach for using diffusion models as priors is to resort to posterior sampling, which has been extensively explored in the literature (Song et al. [2022], Chung et al. [2023a, 2022], Kawar et al. [2022], Song et al. [2023a], Chung et al. [2023b], Meng & Kabashima [2022], Zhang & Zhou [2023]). However, despite their remarkable success, these techniques exhibit several limitations. 
The primary challenge is that the majority of existing works train these models directly in the pixel space, which requires substantial computational resources and a large volume of training data (Rombach et al. [2022]). Latent diffusion models (LDMs), which embed data in order to operate in a lower-dimensional space, present a potential solution to this challenge, along with considerable improvements in computational efficiency (Rombach et al. [2022], Vahdat et al. [2021]) by training diffusion models in a... Figure 1: Example reconstructions of our algorithm (ReSample) on two noisy inverse problems, nonlinear deblurring and CT reconstruction, on natural and medical images, respectively. compressed latent space. They can also provide a great amount of flexibility, as they can enable one to transfer and generalize these models to different domains by fine-tuning on small amounts of training data (Ruiz et al., 2023). Nevertheless, using LDMs to solve inverse problems poses a significant challenge. The main difficulty arises from the inherent nonlinearity and nonconvexity of the decoder, making it challenging to directly apply existing solvers designed for pixel space. To address this issue, a concurrent work by Rout et al. (2023) recently introduced a posterior sampling algorithm operating in the latent space (PSLD), designed to solve linear inverse problems with provable guarantees. However, we observe that PSLD may reconstruct images with artifacts in the presence of measurement noise. Therefore, developing an efficient algorithm capable of addressing these challenges remains an open research question. In this work, we introduce a novel algorithm named ReSample, which effectively employs LDMs as priors for solving general inverse problems. Our algorithm can be viewed as a two-stage process that incorporates data consistency by (1) solving a hard-constrained optimization problem, ensuring we obtain the correct latent variable that is consistent with the observed measurements, and (2) employing a carefully designed resampling scheme to map the measurement-consistent sample back onto the correct noisy data manifold. As a result, we show that our algorithm can achieve state-of-the-art performance on various inverse problem tasks and different datasets, compared to existing algorithms. Notably, owing to using the latent diffusion models as generative priors, our algorithm achieves a reduction in memory complexity. Below, we highlight some of our key contributions. • We propose a novel algorithm that enables us to leverage latent diffusion models for solving general inverse problems (linear and nonlinear) through hard data consistency. • Particularly, we carefully design a stochastic resampling scheme that can reliably map the measurement-consistent samples back onto the noisy data manifold to continue the reverse sampling process. We provide a theoretical analysis to further demonstrate the superiority of the proposed stochastic resampling technique. • With extensive experiments on multiple tasks and various datasets, encompassing both natural and medical images, our proposed algorithm achieves state-of-the-art performance on a variety of linear and nonlinear inverse problems. 2 BACKGROUND Denoising Diffusion Probabilistic Models. We first briefly review the basic fundamentals of diffusion models, namely the denoising diffusion probabilistic model (DDPM) formulation (Ho et al., 2020). Let \( x_0 \sim p_{\text{data}}(x) \) denote samples from the data distribution. 
Diffusion models start by progressively perturbing data to noise via Gaussian kernels, which can be written as the variance-preserving stochastic differential equation (VP-SDE) (Song et al., 2021) of the form \[ dx = -\frac{\beta_t}{2} x dt + \sqrt{\beta_t} dw, \] where \( \beta_t \in (0, 1) \) is the noise schedule that is a monotonically increasing sequence of \( t \) and \( w \) is the standard Wiener process. This is generally defined such that we obtain the data distribution when \( t = 0 \) and obtain a Gaussian distribution when \( t = T \), i.e. \( x_T \sim N(0, I) \). The objective of diffusion models is to learn the corresponding reverse SDE of Equation (1), which is of the form \[ dx = \left[ -\frac{\beta_t}{2} x - \beta_t \nabla_{x_t} \log p(x_t) \right] dt + \sqrt{\beta_t} d\tilde{w}, \] where \( d\tilde{w} \) is the standard Wiener process running backward in time and \( \nabla_{x_t} \log p(x_t) \) is the (Stein) score function. In practice, we approximate the score function using a neural network \( s_\theta \) parameterized by \( \theta \), which can be trained via denoising score matching (Vincent, 2011): \[ \hat{\theta} = \arg \min_\theta \mathbb{E} \left[ \| s_\theta(x_t, t) - \nabla_{x_t} \log p(x_t | x_0) \|_2^2 \right], \] where \( t \) is uniformly sampled from \([0, T]\) and the expectation is taken over \( t, x_t \sim p(x_t | x_0) \), and \( x_0 \sim p_{data}(x) \). Once we have access to the parameterized score function \( s_\theta \), we can use it to approximate the reverse-time SDE and simulate it using numerical solvers (e.g., Euler-Maruyama). **Denoising Diffusion Implicit Models.** As the DDPM formulation is known to have a slow sampling process, Song et al. (2020) proposed denoising diffusion implicit models (DDIMs) that define the diffusion process as a non-Markovian process to remedy this (Ho et al., 2020; Song et al., 2020, 2023b; Lu et al., 2022). This enables a faster sampling process with the sampling steps given by \[ x_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \hat{x}_0(x_t) + \sqrt{1 - \bar{\alpha}_{t-1} - \eta \delta_t^2} s_\theta(x_t, t) + \eta \delta_t \epsilon, \quad t = T, \ldots, 0, \] where \( \alpha_t = 1 - \beta_t \), \( \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i \), \( \epsilon \sim \mathcal{N}(0, I) \), \( \eta \) is the temperature parameter, \( \delta_t \) controls the stochasticity of the update step, and \( \hat{x}_0(x_t) \) denotes the predicted \( x_0 \) from \( x_t \) which takes the form \[ \hat{x}_0(x_t) = \frac{1}{\sqrt{\bar{\alpha}_t}} (x_t + \sqrt{1 - \bar{\alpha}_t} s_\theta(x_t, t)), \] which is an application of Tweedie’s formula. Here, \( s_\theta \) is usually trained using the epsilon-matching score objective (Song et al., 2020). We use DDIM as the backbone of our algorithm and show how we can leverage these update steps for solving inverse problems. **Solving Inverse Problems with Diffusion Models.** Given measurements \( y \in \mathbb{R}^m \) from some forward measurement operator \( A(\cdot) \), we can use diffusion models to solve inverse problems by replacing the score function in Equation (2) with the conditional score function \( \nabla_{x_t} \log p(x_t | y) \). Then by Bayes rule, notice that we can write the conditional score as \[ \nabla_{x_t} \log p(x_t | y) = \nabla_{x_t} \log p(x_t) + \nabla_{x_t} \log p(y | x_t). \] This results in the reverse SDE of the form \[ dx = \left[ -\frac{\beta_t}{2} x - \beta_t (\nabla_{x_t} \log p(x_t) + \nabla_{x_t} \log p(y | x_t)) \right] dt + \sqrt{\beta_t} d\tilde{w}. 
\] In the literature, solving this reverse SDE is referred as posterior sampling. However, the issue with posterior sampling is that there does not exist an analytical formulation for the likelihood term \( \nabla_{x_t} \log p(y|x_t) \). To resolve this, there exists two lines of work: (1) to resort to alternating projections onto the measurement subspace to avoid using the likelihood directly (Chung et al., 2022; Kawar et al., 2022; Wang et al., 2022) and (2) to estimate the likelihood under some mild assumptions (Chung et al., 2023a; Song et al., 2023a). For example, Chung et al. (2023a) proposed diffusion posterior sampling (DPS) that uses a Laplacian approximation of the likelihood, which results in the discrete update steps \[ x_{t-1} = \sqrt{\alpha_{t-1}} \hat{x}_0(x_t) + \sqrt{1 - \bar{\alpha}_{t-1} - \eta \delta_t^2} s_\theta(x_t, t) + \eta \delta_t \epsilon \] \[ x_{t-1} = x'_{t-1} - \zeta \nabla_{x_t} \| y - A(\hat{x}_0(x_t)) \|_2^2, \] where \( \zeta \in \mathbb{R} \) can be viewed as a tunable step-size. However, as previously mentioned, these techniques have limited applicability for real-world problems as they are all built on the pixel space. **Solving Inverse Problems with Latent Diffusion Models.** The limited applicability of pixel-based diffusion models can be tackled by alternatively utilizing more efficient LDMs as generative priors. The setup for LDMs is the following: given an image \( x \in \mathbb{R}^n \), we have an encoder \( E : \mathbb{R}^n \to \mathbb{R}^k \) and a decoder \( D : \mathbb{R}^k \to \mathbb{R}^n \) where \( k \ll n \). Let \( z = E(x) \in \mathbb{R}^k \) denote the embedded samples in the latent space. One way of incorporating LDMs to solve inverse problems would be to replace the update steps in Equation (6) with \[ z'_{t-1} = \sqrt{\alpha_{t-1}} \hat{z}_0(z_t) + \sqrt{1 - \bar{\alpha}_{t-1} - \eta \delta_t^2} s_\theta(z_t, t) + \eta \delta_t \epsilon, \] \[ z_{t-1} = z'_{t-1} - \zeta \nabla_{z_t} \| y - A(D(\hat{z}_0(z_t))) \|_2^2. \] After incorporating LDMs, this can be viewed as a non-linear inverse problem due to the non-linearity of the decoder \( D(\cdot) \). As this builds upon the idea behind DPS, we refer to this algorithm as Latent-DPS. While this formulation seems to work, we empirically observe that Latent-DPS often produces reconstructions that are often noisy or blurry and inconsistent with the measurements. We conjecture that since the forward operator involving the decoder is highly nonconvex, the gradient update may lead towards a local minimum. We provide more insights in Appendix Section D. ### 3 ReSample: Inverse Problems using Latent Diffusion Models #### 3.1 Proposed Method **Hard Data Consistency.** Similar to Latent-DPS, our algorithm involves incorporating data consistency into the reverse sampling process of LDMs. However, rather than a gradient update as shown in Equation (9), we propose to solve an optimization problem on some time steps \( t \): \[ \hat{z}_0(y) \in \arg \min_z \frac{1}{2} \| y - A(D(z)) \|_2^2, \] where we denote \( \hat{z}_0(y) \) as the sample consistent with the measurements \( y \in \mathbb{R}^m \). This optimization problem has been previously explored in other works that use GANs for solving inverse problems, and can be efficiently solved using iterative solvers such as gradient descent (Bora et al., 2017; Jalal et al., 2021; Shah et al., 2021; Lempitsky et al., 2018). 
However, it is well known that solving this problem starting from a random initial point may lead to unfavorable local minima (Bora et al., 2017). To address this, we solve Equation (10) starting from an initial point \( \hat{z}_0(z_{t+1}) \), where \( z_0(z_{t+1}) \) is the estimate of ground-truth latent vector at time 0 based on the sample at time \( t + 1 \). The intuition behind this initialization is that we want to start the optimization process within local proximity of the global solution of Equation (10), to prevent resulting in a local minimum. We term this overall concept as hard data consistency, as we strictly enforce the measurements via optimization, rather than a “soft” approach through gradient update like Latent-DPS. To obtain \( \hat{z}_0(z_{t+1}) \), we use Tweedie’s formula (Efron, 2011) that gives us an approximation of the posterior mean which takes the following formula: \[ \hat{z}_0(z_t) = \mathbb{E}[z_0|z_t] = \frac{1}{\sqrt{\alpha_t}} (z_t + (1 - \bar{\alpha}_t) \nabla \log p(z_t)). \] Algorithm 1 ReSample: Solving Inverse Problems with Latent Diffusion Models Require: Measurements \( y \), \( A(\cdot) \), Encoder \( E(\cdot) \), Decoder \( D(\cdot) \), Score function \( s_\theta(\cdot; t) \), Pretrained LDM Parameters \( \beta_t, \alpha_t, \eta, \delta \), Hyperparameter \( \gamma \) to control \( \sigma^2_t \), Time steps to perform resample \( C \) \[ z_T \sim \mathcal{N}(0, I) \] for \( t = T - 1, \ldots, 0 \) do \[ \epsilon_t \sim \mathcal{N}(0, I) \] \[ \hat{\epsilon}_{t+1} = s_\theta(z_{t+1}, t + 1) \] \[ \hat{z}_0(z_{t+1}) = \frac{1}{\sqrt{\alpha_{t+1}}} (z_{t+1} - \sqrt{1 - \alpha_{t+1}} \hat{\epsilon}_{t+1}) \] \[ z'_t = \sqrt{\alpha_t} \hat{z}_0(z_{t+1}) + \sqrt{1 - \alpha_t - \eta \delta^2} \hat{\epsilon}_{t+1} + \eta \delta \epsilon_1 \] if \( t \in C \) then \[ \hat{z}_0(y) \in \arg \min_z \frac{1}{2} \| y - A(D(z)) \|^2_z \] \( z_t = \text{StochasticResample}(\hat{z}_0(y), z'_t, \gamma) \) else \[ z_t = z'_t \] \( x_0 = D(z_0) \) Output reconstructed image However, we would like to note that performing hard data consistency on every reverse sampling iteration \( t \) may be very costly. To address this, we first observe that as we approach \( t = T \), the estimated \( \hat{z}_0(z_{t+1}) \) can deviate significantly from the ground truth. In this regime, we find that hard data consistency provides only marginal benefits. Additionally, in the literature, existing works point out the existence of a three-stage phenomenon (Yu et al., 2023), where they demonstrate that data consistency is primarily beneficial for the semantic and refinement stages (the latter two stages when \( t \) is closer to 0) of the sampling process. Following this reasoning, we divide \( T \) into three sub-intervals and only apply the optimization in the latter two intervals. This approach provides both computational efficiency and accurate estimates of \( \hat{z}_0(z_{t+1}) \). Furthermore, even during these two intervals, we observe that we do not need to solve the optimization problem on every iteration \( t \). Because of the continuity of the sampling process, after each data-consistency optimization step, the samples in the following steps can retain similar semantic or structural information to some extent. Thus, we “reinforce” the data consistency constraint during the sampling process via a skipped-step mechanism. Empirically, we see that it is sufficient to perform this on every 10 (or so) iterations of \( t \). 
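Concretely, the hard data-consistency step of Algorithm 1 amounts to a few iterations of a first-order solver on Equation (10), warm-started at the Tweedie estimate \( \hat{z}_0(z_{t+1}) \) and stopped early once the residual reaches a noise-dependent threshold (discussed below). The following is a minimal sketch assuming a differentiable decoder \( D \) and forward operator \( A \); the optimizer, learning rate, and iteration budget are illustrative choices rather than the exact settings used in the paper.

```python
import torch

def hard_data_consistency(z_init, y, A, D, tau=1e-4, lr=5e-3, max_iters=500):
    """Approximately solve  min_z 0.5 * ||y - A(D(z))||^2  (Equation (10)),
    warm-started from the Tweedie estimate z_init = z0_hat(z_{t+1})."""
    z = z_init.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(max_iters):
        optimizer.zero_grad()
        loss = 0.5 * ((y - A(D(z))) ** 2).sum()
        loss.backward()
        optimizer.step()
        if loss.item() < tau:  # early stopping so we do not fit the measurement noise
            break
    return z.detach()          # z0_hat(y), the measurement-consistent latent
```

In Algorithm 1, this routine is only invoked on the subset \( C \) of time steps (roughly every tenth step, and only in the latter two of the three sub-intervals), after which the returned latent is mapped back onto the noisy manifold at time \( t \) by the resampling step described next.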
One can think of hard data consistency as guiding the sampling process towards the ground truth signal \( x^* \) (or respectively \( z^* \)) such that it is consistent with the given measurements. Lastly, in the presence of measurement noise, minimizing Equation (10) to zero loss can lead to overfitting the noise. To remedy this, we perform early stopping, where we only minimize up to a threshold \( \tau \) based on the noise level. We will discuss the details of the optimization process in the Appendix. We also observe that an additional Latent-DPS step after unconditional sampling can (sometimes) marginally increase the overall performance. We perform an ablation study on the performance of including Latent-DPS in the Appendix. Remapping Back to \( z_t \). Following the flowchart in Figure 2, the next step is to map the measurement-consistent sample \( \hat{z}_0(y) \) back onto the data manifold defined by the noisy samples at time \( t \) to continue the reverse sampling process. Doing so would be equivalent to computing the posterior distribution \( p(z_t|y) \). To incorporate \( \hat{z}_0(y) \) into the posterior, we propose to construct an auxiliary distribution \( p(\hat{z}_t|\hat{z}_0(y), y) \) to replace \( p(z_t|y) \). Here, \( \hat{z}_t \) denotes the remapped sample and \( z'_t \) denotes the unconditional sample before remapping. One simple way of computing this distribution to obtain \( \hat{z}_t \) is shown in Proposition 1. Proposition 1 (Stochastic Encoding). Since the sample \( \hat{z}_t \) given \( \hat{z}_0(y) \) and measurement \( y \) is conditionally independent of \( y \), we have that \[ p(\hat{z}_t|\hat{z}_0(y), y) = p(\hat{z}_t|\hat{z}_0(y)) = \mathcal{N}(\sqrt{\alpha_t} \hat{z}_0(y), (1 - \bar{\alpha}_t) I). \] We defer all of the proofs to the Appendix. Proposition 1 provides us a way of computing \( \hat{z}_t \), which we refer to as stochastic encoding. However, we observe that using stochastic encoding can incur a high variance when \( t \) is farther away from \( t = 0 \), where the ground truth signal exists. This large variance can often lead to noisy image reconstructions. To address this issue, we propose a posterior sampling technique that reduces the variance by additionally conditioning on $z'_t$, the unconditional sample at time $t$. Here, the intuition is that by using information of $z'_t$, we can get closer to the ground truth $z_t$, which effectively reduces the variance. In Lemma 2, under some mild assumptions, we show that this new distribution $p(\hat{z}_t|z'_t, \hat{z}_0(y), y)$ is a tractable Gaussian distribution. **Proposition 2 (Stochastic Resampling).** Suppose that $p(z'_t|\hat{z}_t, \hat{z}_0(y), y)$ is normally distributed such that $p(z'_t|\hat{z}_t, \hat{z}_0(y), y) = N(\mu_t, \sigma^2_t)$. If we let $p(\hat{z}_t|\hat{z}_0(y), y)$ be a prior for $\mu_t$, then the posterior distribution $p(\hat{z}_t|z'_t, \hat{z}_0(y), y)$ is given by $$p(\hat{z}_t|z'_t, \hat{z}_0(y), y) = N\left(\frac{\sigma^2_t \sqrt{\alpha_t} \hat{z}_0(y) + (1 - \bar{\alpha}_t) z'_t}{\sigma^2_t + (1 - \bar{\alpha}_t)}, \frac{\sigma^2_t (1 - \bar{\alpha}_t)}{\sigma^2_t + (1 - \bar{\alpha}_t)} I\right).$$ We refer to this new mapping technique as *stochastic resampling*. Since we do not have access to $\sigma^2_t$, it serves as a hyperparameter that we tune in our algorithm. The choice of $\sigma^2_t$ plays a role of controlling the tradeoff between prior consistency and data consistency. 
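Both remapping rules are one-line Gaussian samplers; Proposition 2 is the standard conjugate-Gaussian posterior obtained by treating \( p(\hat{z}_t|\hat{z}_0(y)) \) as the prior on the mean and \( z'_t \) as an observation with variance \( \sigma^2_t \). A minimal sketch, with \( \sigma^2_t \) passed in as the tunable hyperparameter:

```python
import torch

def stochastic_encode(z0_y, t, alphas_cumprod):
    # Proposition 1: z_t ~ N( sqrt(abar_t) * z0(y), (1 - abar_t) I )
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * z0_y + (1 - a_bar).sqrt() * torch.randn_like(z0_y)

def stochastic_resample(z0_y, z_t_prime, t, alphas_cumprod, sigma_sq):
    # Proposition 2: posterior of z_t given both z0(y) and the unconditional z'_t;
    # sigma_sq (= sigma_t^2) trades off prior consistency against data consistency.
    a_bar = alphas_cumprod[t]
    denom = sigma_sq + (1 - a_bar)
    mean = (sigma_sq * a_bar.sqrt() * z0_y + (1 - a_bar) * z_t_prime) / denom
    var = sigma_sq * (1 - a_bar) / denom
    return mean + var.sqrt() * torch.randn_like(z0_y)
```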
If $\sigma^2_t \to 0$, then we recover unconditional sampling, and if $\sigma^2_t \to \infty$, we recover stochastic encoding. We observe that this new technique also has several desirable properties, which we prove rigorously in the next section.

### 3.2 Theoretical Results

In Section 3.1, we discussed that stochastic resampling induces less variance than stochastic encoding. Here, we aim to rigorously prove the validity of this statement.

**Lemma 1.** Let $\tilde{z}_t$ and $\hat{z}_t$ denote the stochastically encoded and resampled image of $\hat{z}_0(y)$, respectively. If $\text{VAR}(z'_t) > 0$, then we have that $\text{VAR}(\hat{z}_t) < \text{VAR}(\tilde{z}_t)$.

**Theorem 1.** If $\hat{z}_0(y)$ is measurement-consistent such that $y = A(D(\hat{z}_0(y)))$, i.e. $\hat{z}_0 = \hat{z}_0(z_{t+1}) = \hat{z}_0(y)$, then stochastic resampling is unbiased such that $\mathbb{E}[\hat{z}_t|y] = \mathbb{E}[z'_t]$.

These two results, Lemma 1 and Theorem 1, establish the benefits of stochastic resampling. At a high level, the proofs rely on the fact that the posterior distributions of both stochastic encoding and resampling are Gaussian and compare their respective means and variances. In the following result, we characterize the variance induced by stochastic resampling, and show that as $t \to 0$, the variance decreases, giving us a reconstructed image that is of better quality.

**Theorem 2.** Let $z_0$ denote a sample from the data distribution and $z_t$ be a sample from the noisy perturbed distribution at time $t$. Then,
$$\text{Cov}(z_0|z_t) = \frac{(1 - \bar{\alpha}_t)^2}{\bar{\alpha}_t} \nabla^2_{z_t} \log p_{z_t}(z_t) + \frac{1 - \bar{\alpha}_t}{\bar{\alpha}_t} I.$$

By Theorem 2, notice that since $\bar{\alpha}_t$ increases towards 1 as $t$ decreases, the variance between the ground truth $z_0$ and the estimated $\hat{z}_0$ decreases to 0 as $t \to 0$, assuming that $\nabla^2_{z_t} \log p_{z_t}(z_t) < \infty$. Following our theory, we empirically show that stochastic resampling can reconstruct signals that are less noisy than stochastic encoding, as shown in the next section.

### 4 Experiments

We conduct experiments to solve both linear and nonlinear inverse problems on natural and medical images.
We compare our algorithm to several state-of-the-art methods that directly apply diffusion models trained in the pixel space: DPS (Chung et al., 2023a), Manifold Constrained Gradients (MCG) (Chung et al., 2022), Denoising Diffusion Restoration Models (DDRM) (Kawar et al., 2022), and Diffusion Model Posterior Sampling (DMPS) (Meng & Kabashima, 2022). We also compare against an algorithm that uses a plug-and-play approach, applying a pretrained deep denoiser to inverse problems (ADMM-PnP) (Ahmad et al., 2019), as well as Latent-DPS and Posterior Sampling with Latent Diffusion (PSLD) (Rout et al., 2023), a concurrent work that we recently became aware of which also tackles latent diffusion models. Various quantitative metrics are used for evaluation, including Learned Perceptual Image Patch Similarity (LPIPS) distance, peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Lastly, we conduct an ablation study to compare the performance of stochastic encoding and our proposed stochastic resampling technique, as described in Section 3.1, and also demonstrate the memory efficiency gained by leveraging LDMs.

| Method | Super Resolution 1×: LPIPS↓ | PSNR↑ | SSIM↑ | Inpainting (Random 70%): LPIPS↓ | PSNR↑ | SSIM↑ |
|-----------------|--------|-------|-------|--------|-------|-------|
| DPS (Chung et al., 2023a) | 0.173 ± 0.04 | 28.41 ± 2.20 | 0.782 ± 0.06 | 0.102 ± 0.02 | 32.48 ± 2.30 | 0.899 ± 0.03 |
| MCG (Chung et al., 2022) | 0.193 ± 0.03 | 25.92 ± 2.35 | 0.740 ± 0.05 | 0.134 ± 0.03 | 29.53 ± 2.70 | 0.847 ± 0.05 |
| ADMM-PnP (Ahmad et al., 2019) | 0.304 ± 0.04 | 21.08 ± 3.13 | 0.631 ± 0.11 | 0.627 ± 0.07 | 15.40 ± 2.09 | 0.342 ± 0.09 |
| DDRM (Kawar et al., 2022) | 0.151 ± 0.03 | 29.49 ± 1.93 | 0.817 ± 0.05 | 0.166 ± 0.03 | 27.69 ± 1.54 | 0.798 ± 0.04 |
| DMPS (Meng & Kabashima, 2022) | 0.147 ± 0.03 | 28.48 ± 1.92 | 0.811 ± 0.05 | 0.175 ± 0.03 | 28.84 ± 1.65 | 0.826 ± 0.03 |
| Latent-DPS | 0.272 ± 0.05 | 26.83 ± 2.00 | 0.691 ± 0.07 | 0.226 ± 0.04 | 26.23 ± 1.84 | 0.703 ± 0.07 |
| PSLD-LDM* (Rout et al., 2023) | 0.209 ± 0.10 | 27.61 ± 2.95 | 0.704 ± 0.17 | 0.260 ± 0.08 | 27.07 ± 2.45 | 0.689 ± 0.11 |
| ReSample (Ours) | **0.144 ± 0.029** | **30.45 ± 2.09** | **0.832 ± 0.05** | **0.082 ± 0.02** | **32.77 ± 2.23** | **0.903 ± 0.03** |

Table 1: Quantitative results of super resolution and inpainting on the CelebA-HQ dataset. Input images have an additive Gaussian noise with $\sigma_y = 0.01$. Best results are in bold and second best results are underlined.

Table 2: Quantitative results of Gaussian and nonlinear deblurring on the CelebA-HQ dataset. Input images have an additive Gaussian noise with $\sigma_y = 0.01$. Best results are in bold and second best results are underlined. For nonlinear deblurring, some baselines are omitted, as they can only solve linear inverse problems.

| Method | Nonlinear Deblurring: LPIPS↓ | PSNR↑ | SSIM↑ | Gaussian Deblurring: LPIPS↓ | PSNR↑ | SSIM↑ |
|-----------------|--------|-------|-------|--------|-------|-------|
| DPS (Chung et al., 2023a) | 0.230 ± 0.065 | 26.81 ± 2.84 | 0.720 ± 0.077 | 0.175 ± 0.03 | 28.36 ± 2.12 | 0.772 ± 0.07 |
| MCG (Chung et al., 2022) | - | - | - | 0.517 ± 0.06 | 15.85 ± 1.08 | 0.536 ± 0.08 |
| ADMM-PnP (Ahmad et al., 2019) | 0.499 ± 0.073 | 16.17 ± 4.01 | 0.359 ± 0.140 | 0.289 ± 0.04 | 20.98 ± 4.51 | 0.602 ± 0.15 |
| DDRM (Kawar et al., 2022) | - | - | - | 0.193 ± 0.04 | 26.88 ± 1.96 | 0.747 ± 0.07 |
| DMPS (Meng & Kabashima, 2022) | - | - | - | 0.206 ± 0.04 | 26.45 ± 1.83 | 0.726 ± 0.07 |
| Latent-DPS | 0.225 ± 0.04 | 26.18 ± 1.73 | 0.703 ± 0.07 | 0.205 ± 0.04 | 27.42 ± 1.84 | 0.729 ± 0.07 |
| PSLD-LDM* (Rout et al., 2023) | - | - | - | 0.323 ± 0.09 | 24.21 ± 2.79 | 0.548 ± 0.12 |
| ReSample (Ours) | **0.153 ± 0.03** | **30.18 ± 2.21** | **0.828 ± 0.05** | **0.148 ± 0.04** | **30.69 ± 2.14** | **0.832 ± 0.05** |

Table 3: Quantitative results of CT reconstruction on the LDCT dataset. Best results are in bold and second best results are underlined.
| Method | Abdominal PSNR↑ | Abdominal SSIM↑ | Head PSNR↑ | Head SSIM↑ | Chest PSNR↑ | Chest SSIM↑ |
|-----------------|-----------|------|-------|-------|-------|-------|
| Latent-DPS | 26.80 ± 1.09 | 0.870 ± 0.026 | 28.64 ± 5.38 | 0.893 ± 0.058 | 25.67 ± 1.14 | 0.822 ± 0.033 |
| MCG (Chung et al., 2022) | 29.41 ± 3.14 | 0.857 ± 0.041 | 28.28 ± 3.08 | 0.795 ± 0.116 | 27.92 ± 2.48 | 0.842 ± 0.036 |
| DPS (Chung et al., 2023a) | 27.33 ± 2.68 | 0.715 ± 0.031 | 24.51 ± 2.77 | 0.665 ± 0.058 | 24.73 ± 1.84 | 0.682 ± 0.113 |
| FBP (Tilton et al., 2021) | 32.84 ± 1.29 | 0.942 ± 0.008 | 33.45 ± 3.25 | 0.945 ± 0.023 | 29.67 ± 1.14 | 0.891 ± 0.011 |
| FBP-Unet (Jin et al., 2017) | 32.77 ± 1.21 | 0.937 ± 0.013 | 31.95 ± 3.32 | 0.917 ± 0.048 | 29.78 ± 1.12 | 0.885 ± 0.016 |
| ReSample (Ours) | **35.91 ± 1.22** | **0.965 ± 0.007** | **37.82 ± 5.31** | **0.978 ± 0.014** | **31.72 ± 0.912** | **0.922 ± 0.011** |

**Experiments on Natural Images.** For the experiments on natural images, we use the FFHQ (Kazemi & Sullivan, 2014), CelebA-HQ (Liu et al., 2015), and LSUN-Bedroom (Yu et al., 2016) datasets with an image resolution of $256 \times 256 \times 3$. We take pre-trained latent diffusion models LDM-VQ4 trained on FFHQ and CelebA-HQ provided by Rombach et al. (2022), with autoencoders that yield latents of size $64 \times 64 \times 3$, and DDPMs (Ho et al., 2020) also trained on the FFHQ and CelebA-HQ training sets. Then, we sample 100 images from both the FFHQ and CelebA-HQ validation sets for testing evaluation. For computing quantitative results, all images are normalized to the range $[0, 1]$. All experiments had Gaussian measurement noise with standard deviation $\sigma_y = 0.01$. Due to limited space, we defer the results on FFHQ and details of the hyperparameters to the Appendix.

For linear inverse problems, we consider the following tasks: (1) Gaussian deblurring, (2) inpainting (with a random mask), and (3) super resolution. For Gaussian deblurring, we use a kernel of size $61 \times 61$ with standard deviation $3.0$. For super resolution, we use bicubic downsampling, and for inpainting, a random mask with varying levels of missing pixels. For nonlinear inverse problems, we consider nonlinear deblurring as proposed by Chung et al. (2023a). The quantitative results are displayed in Tables 1 and 2, with qualitative results in Figure 3. In Tables 1 and 2, we can see that ReSample significantly outperforms all of the baselines across all three metrics on the CelebA-HQ dataset. We also observe that ReSample performs better than or comparably to all baselines on the FFHQ dataset, as shown in the Appendix.
Remarkably, our method excels in handling nonlinear inverse problems, further demonstrating the flexibility of our algorithm. We illustrate this in Figure 6a, where we show that ReSample consistently outperforms DPS on nonlinear deblurring.

*We have updated the baseline results for PSLD. More details are provided in the Appendix (Section A.4).*

Figure 3: Qualitative results of multiple tasks on the LSUN-Bedroom and CelebA-HQ datasets. All inverse problems have Gaussian measurement noise with standard deviation $\sigma_y = 0.01$.

Figure 4: Qualitative results of CT reconstruction on the LDCT dataset. We annotate the critical image structures with a red box and zoom in below the image.

**Experiments on Medical Images.** For experiments on medical images, we fine-tune LDMs on 2000 2D computed tomography (CT) images with an image resolution of $256 \times 256$, randomly sampled from the AAPM LDCT dataset (Moen et al., 2021) of 40 patients, and test on 300 2D CT images from the remaining 10 patients. We take the FFHQ LDM-VQ4 model provided by Rombach et al. (2022) as the pre-trained LDM, followed by 100K fine-tuning iterations. Then, we simulate CT measurements (sinograms) with a parallel-beam geometry using 25 projection angles equally distributed across 180 degrees using the torch-radon package (Ronchetti, 2020). As with the natural images, the sinograms were perturbed with additive Gaussian measurement noise with $\sigma_y = 0.01$. Along with the baselines used in the natural image experiments, we also compare to the following methods: (1) Filtered Backprojection (FBP), a standard CT reconstruction technique, and (2) FBP-UNet, a supervised method that uses a UNet model as its backbone. For FBP-UNet and PnP-UNet, we trained a model on 3480 2D CT images from the training set. For MCG and DPS, we used the pre-trained checkpoints provided by Chung et al. (2022). We present visual reconstructions with highlighted critical anatomical structures in Figure 4 and quantitative results in Table 3. Here, we observe that our algorithm outperforms all of the baselines in terms of PSNR and SSIM. Visually, we also observe that our algorithm reconstructs smoother images with more accurate and sharper details.

**Effectiveness of the Resampling Technique.** Here, we validate our theoretical results by conducting ablation studies on stochastic resampling. Specifically, we perform experiments on the LSUN-Bedroom and CelebA-HQ datasets with the tasks of Gaussian deblurring and super resolution. As shown in Figure 5, we observe that stochastic resampling reconstructs smoother images with higher PSNRs compared to stochastic encoding, corroborating our theory.

Figure 5: **Effectiveness of our resampling technique compared to stochastic encoding.** Results are demonstrated on the LSUN-Bedroom and CelebA-HQ datasets with measurement noise of $\sigma_y = 0.05$ to highlight the effectiveness of stochastic resampling.

(a) More examples of nonlinear deblurring. (b) Performance vs. number of time steps at which resampling is performed.

Figure 6: Left: Additional results on nonlinear deblurring highlighting the performance of ReSample. Right: Ablation study on the effect of the ReSample frequency on performance.

**Effectiveness of Hard Data Consistency.** In Figure 6b, we perform an ablation study on the ReSample frequency for CT reconstruction. We observe that reconstruction quality improves as the number of ReSample time steps increases. This observation is in line with what we expect intuitively, as more ReSample time steps (i.e., more data consistency) lead to more accurate reconstructions.
**Memory Efficiency.** To demonstrate memory efficiency, we use the command `nvidia-smi` to monitor the memory consumption while solving an inverse problem. We present the memory usage for Gaussian deblurring on the FFHQ dataset in Table 4. Although the entire LDM models occupy more memory due to the autoencoders, our algorithm itself exhibits memory efficiency, resulting in lower overall memory usage. This highlights its potential in domains like medical imaging, where memory plays a crucial role in feasibility.

| Model | Algorithm | Model Only | Memory Increment | Total |
|-------|-----------|------------|------------------|-------|
| DDPM | DPS | 1953MB | +3416MB (175%) | 5369MB |
| | MCG | | +3421MB (175%) | 5374MB |
| | DMPS | | +5215MB (267%) | 7168MB |
| | DDRM | | +18833MB (964%) | 20786MB |
| LDM | PSLD | 3969MB | +5516MB (140%) | 9485MB |
| | ReSample | | +1040MB (26.2%) | 5009MB |

Table 4: Memory usage of different methods for Gaussian deblurring on the FFHQ dataset.

5 CONCLUSION

In this paper, we propose ReSample, an algorithm that can effectively leverage LDMs to solve general inverse problems. We demonstrated that our algorithm can reconstruct high-quality images compared to many baselines, including those in the pixel space. One limitation of our method lies in the computational overhead of hard data consistency, which we leave as a significant challenge for future work to address and improve upon.

6 REPRODUCIBILITY STATEMENT

To ensure the reproducibility of our results, we thoroughly detail the hyperparameters employed in our algorithm in the Appendix. Additionally, we provide a comprehensive explanation of the configuration of all baselines used in our experiments. As we use pre-trained diffusion models throughout our experiments, they are readily accessible online. Lastly, our code is available at https://github.com/soominkwon/resample.

7 ACKNOWLEDGEMENTS

BS and LS acknowledge support from U-M MIDAS PODS Grant and U-M MICDE Catalyst Grant, and computing resource support from NSF ACCESS Program and Google Cloud Research Credits Program. This work used NCSA Delta GPU through allocation CIS230133 and ELE230011 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. SMK and QQ acknowledge support from U-M START & PODS grants, NSF CAREER CCF-2143904, NSF CCF-2212066, NSF CCF-2212326, ONR N00014-22-1-2529, AWS AI Award, and a gift grant from KLA.

REFERENCES

Manya V. Afonso, José M. Bioucas-Dias, and Mário A. T. Figueiredo. An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Transactions on Image Processing, 20(3):681–695, 2011. doi: 10.1109/TIP.2010.2076294.

Rizwan Ahmad, Charles A Bouman, Gregery T Buzzard, Stanley H Chan, Edward T Reehorst, and Philip Schniter. Plug and play methods for magnetic resonance imaging. 2019.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. doi: 10.1137/080716542. URL https://doi.org/10.1137/080716542.

Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generative models. In International Conference on Machine Learning, pp. 537–546. PMLR, 2017.

Paul A. Bromiley. Products and convolutions of gaussian probability density functions. 2013.

Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye.
Improving diffusion models for inverse problems using manifold constraints. arXiv preprint arXiv:2206.00941, 2022. Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=OnD9zGAGT0k. Hyungjin Chung, Suhyeon Lee, and Jong Chul Ye. Fast diffusion sampler for inverse problems by geometric decomposition. arXiv preprint arXiv:2303.05754, 2023b. Hyungjin Chung, Jong Chul Ye, Peyman Milanfar, and Mauricio Delbracio. Prompt-tuning latent diffusion models for inverse problems. arXiv preprint arXiv:2310.01110, 2023c. Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis. arXiv preprint arXiv:2105.05233, 2021a. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021b. Bradley Efron. Tweedie’s formula and selection bias. Journal of the American Statistical Association, 106(496):1602–1614, 2011. doi: 10.1198/jasa.2011.tm11181. URL https://doi.org/10.1198/jasa.2011.tm11181.
qKhpp9YAkO
If I'm correct, the $t$ is not the iteration time step -- it is the batch time step -- but perhaps an iteration time step is implicit. In that case, the expression for $\hat{\xi}^t$ in Eq (8) right, could be understood as the minimum of $\xi^t$ over these implicit time steps. Is all this correct?
ASSOCIATIVE TRANSFORMER IS A SPARSE REPRESENTATION LEARNER Anonymous authors Paper under double-blind review ABSTRACT Emerging from the monolithic pairwise attention mechanism in conventional Transformer models, there is a growing interest in leveraging sparse interactions that align more closely with biological principles. Approaches including the Set Transformer and the Perceiver employ cross-attention consolidated with a latent space that forms an attention bottleneck with limited capacity. Building upon recent neuroscience studies of Global Workspace Theory and associative memory, we propose the Associative Transformer (AiT). AiT induces low-rank explicit memory that serves as both priors to guide bottleneck attention in the shared workspace and attractors within associative memory of a Hopfield network. Through joint end-to-end training, these priors naturally develop module specialization, each contributing a distinct inductive bias to form attention bottlenecks. A bottleneck can foster competition among inputs for writing information into the memory. We show that AiT is a sparse representation learner, learning distinct priors through the bottlenecks that are complexity-invariant to input quantities and dimensions. AiT demonstrates its superiority over methods such as the Set Transformer, Vision Transformer, and Coordination in various vision tasks. 1 INTRODUCTION The predominant paradigm in conventional deep neural networks has been characterized by a monolithic architecture, wherein each input sample is subjected to uniform processing within a singular model framework. For instance, Transformer models use pairwise attention to establish correlations among disparate segments of input information [Vaswani et al., 2017; Dosovitskiy et al., 2021]. Emerging from the pair-wise attention mechanism, there is a growing interest in leveraging modular and sparse interactions that align more closely with biological principles. This sparsity attribute has demonstrated advantages in enhancing model performance and learning efficiency, making it a crucial element for intelligent entity learning [Brooks, 1991; Greff et al., 2020; Minsky, 1986]. Modularization of knowledge can find resonance with the neuroscientific grounding of the Global Workspace Theory (GWT) [Baars, 1988; Dehaene & Changeux, 1998; VanRullen & Kanai, 2020; Juliani et al., 2022]. GWT explains a fundamental cognitive architecture for information processing within the brain, where diverse specialized modules compete to write information into a shared workspace through a communication bottleneck. The bottleneck facilitates the processing of content-addressable information through attention that is guided by working memory [Awh et al., 2006; Gazzaley & Nobre, 2017]. The coordination method [Goyal et al., 2022b] represents the initial attempt to assess the effectiveness of GWT in conventional neural network models. Unfortunately, this method relies on iterative cross-attention for both information writing and retrieval within the shared workspace. When examining information retrieval in the human brain, it is evident that memory typically encompasses both working memory and long-term memory in the hippocampus. Specifically, the hippocampus operates on Hebbian learning for retrieving information from working memory, akin to the associative memory found in Hopfield networks [Hopfield, 2007; Ramsauer et al., 2021]. 
Our research has revealed that replacing such a repetitive attention-based mechanism with a consolidated, more biologically-plausible associative memory can lead to improved model performance. Associative memory has the capability to directly store and retrieve patterns from the shared workspace without the need for additional parameters by relying on an energy function, which fundamentally differs from an attention mechanism. Our objective is to introduce a shared workspace augmented with associative memory into a Transformer model, thereby facilitating a more comprehensive and efficient association of information fragments.

To this end, we propose the Associative Transformer (AiT) based on a novel global workspace layer augmented by associative memory. The global workspace layer entails three main components: 1) the squash layer: input data is transformed into a list of patches regardless of which samples they come from, 2) the bottleneck attention: patches are sparsely selected to learn a set of priors in low-rank memory based on a bottleneck attention mechanism, and 3) the Hopfield network: information is broadcast from the shared workspace to update the current input based on the associative memory of a Hopfield network. Moreover, the bottleneck attention and the low-rank memory contribute to reduced model complexity. However, cascading multiple of these components may lead to difficulty in the emergence of specialized priors in explicit memory. As information flows through multiple layers, it becomes more challenging to maintain specialized priors from diluted representations. Consequently, learning specialized priors in layers cascaded in depth requires a mechanism that counteracts this inherent loss of input specificity. To overcome this challenge, we propose the bottleneck attention balance loss to encourage the diverse selection of inputs in the shared workspace. Through end-to-end training, we show the emerging specialization of low-rank priors, contributing to enhanced performance in vision tasks. This distinguishes our work from previous literature, which relied on latent memory comprising indistinct priors with the same dimension as the input, such as the Set Transformer (Lee et al., 2019), Perceiver (Jaegle et al., 2021), and Luna (Ma et al., 2021). The no-free-lunch theorem (Baxter, 2000; Goyal & Bengio, 2020b) states that a set of inductive biases over the space of all functions is necessary to obtain generalization. We demonstrate that the specialization of priors serves as critical inductive biases, encouraging competition among input data and inducing sparsity in the attention mechanism of Transformer models.

Overall, the main contributions of this work are as follows. (1) This work proposes a more biologically plausible learning framework called Associative Transformer (AiT) based on the Global Workspace Theory and associative memory. (2) AiT is a sparse representation learner, leveraging sparse bottleneck attention enhanced by a novel attention balance loss to acquire naturally emerging specialized priors. (3) We devise low-rank priors that are adaptively encoded and decoded for increased memory capacity. AiT can learn a large set of specialized priors (up to 128) from a diverse pool of patches (up to 32.8k). (4) The learned priors serve as attractors within the associative memory of a Hopfield network, enabling information broadcast from the workspace. This is the first work to incorporate the Hopfield network as an integral element in a sparse attention mechanism.
2 RELATED WORK This section provides a summary of relevant research concerning sparse attention architectures. We investigate and compare these studies based on their relatedness to the global workspace theory in terms of several key conditions (please see Appendix A.2 for a complete comparison). Transformer models do not possess inductive biases that allow the model to attend to different segments of the input data Goyal & Bengio (2020a). To enhance Transformer models, studies of sparse attention architectures explored consolidating latent memory to extract contextual representations from input data Gupta & Berant (2020); Jaegle et al. (2021); Goyal et al. (2022b); Jaegle et al. (2022); Lee et al. (2019); Ma et al. (2021). For instance, Perceiver Jaegle et al. (2021) and Perceiver IO Jaegle et al. (2022) used iterative cross-attention with a latent array as priors and a latent transformation applied to the priors, to capture dependencies across input data. Set Transformer Lee et al. (2019) and Linear Unified Nested Attention (Luna) Ma et al. (2021) employed iterative cross-attention, but without using a latent transformation. Other attention mechanisms that rely on strong inductive biases with predefined network modularization are omitted Qu et al. (2020). In our method, distinct priors naturally emerge through end-to-end training. Moreover, the previous methods using latent memory necessitated priors with the same dimension as the input. In contrast, we devise low-rank priors that can be encoded and decoded adaptively for increased memory capacity. In the same vein of building sparse attention mechanisms through a shared workspace, Coordination Goyal et al. (2022b) used iterative cross-attentions via a bottleneck to encourage more effective module communication. They argued that more flexibility and generalization could emerge through the competition of specialized modules. However, the priors in the coordination method possess the same dimension as the input, and the number of priors is limited to fewer than 10. The evaluation was also restricted to simple tasks. Unlike the coordination method, we propose low-rank explicit memory to learn a larger set of specialized priors (up to 128) from a pool of patches (up to 32.8k). Moreover, the coordination method relies on iterative cross-attentions to learn such priors, while this work focuses on a novel learning method of associative memory-augmented attention. Furthermore, external memory such as tape storage and associative memory has been successfully employed [Graves et al. (2014); Gülçehre et al. (2018); Krotov & Hopfield (2016); Hoover et al. (2023)]. Recent studies explored the potential use of Hopfield networks [Hopfield (2007)] and their modern variants [Demircigil et al. (2017); Ramsauer et al. (2021)] in Transformers. In contrast to these investigations, we incorporate Hopfield networks as an integral element in constructing the global workspace layer, functioning as a mechanism for information broadcast in the shared workspace. This goal is fundamentally different from prior studies focused on using Hopfield networks independently of the attention mechanism. 3 INSPECTING ATTENTION HEADS IN VISION TRANSFORMERS Vision Transformers (ViT) tackle image classification tasks by processing sequences of image patches. The pre-processing layer partitions an image into non-overlapping patches, followed by a learnable linear projection layer. 
Let \( x \in \mathbb{R}^{H \times W \times C} \) be an input, where \((H, W)\) is the resolution of the image and \(C\) is the number of channels. \(x\) is separated into a sequence of patches \(x_p \in \mathbb{R}^{N \times (P^2 \times C)}\), where \((P, P)\) is the resolution of each image patch and \(N = \frac{HW}{P^2}\) is the number of patches. These patches are mapped to embeddings \(v_p \in \mathbb{R}^{N \times E}\) with the linear projection. ViT leverages self-attention where each head maps a query and a set of key-value pairs to an output. The patch embeddings are used to obtain the query, key, and value based on linear transformations \(W^Q \in \mathbb{R}^{E \times D}\), \(W^K \in \mathbb{R}^{E \times D}\), and \(W^V \in \mathbb{R}^{E \times D}\). The output is a weighted sum of the values:
\[ h^i(v) = \text{softmax}\left(\frac{v W^Q (v W^K)^T}{\sqrt{D}}\right) v W^V, \tag{1} \]
\[ \text{Multi-head}(v) = \text{Concat}(h^1, \ldots, h^A) W^O, \tag{2} \]
where \(W^O\) is a linear transformation for outputs, and \(A\) is the number of attention heads.

We assume that the competition within the pair-wise attention of different patches would be of importance for the model to learn meaningful representations. If such competition exists, a trained model will naturally result in sparser interactions in attention heads. Therefore, we first performed an analysis of the operating modes of different attention heads in a pretrained ViT model by measuring the number of patches each head is attending to. We refer to Appendix A.4 for the detailed experimental settings. The inspection revealed the existing competition among patches and a large redundancy in the pair-wise attention. Fewer than 80% of the interactions were activated in ViT, and several heads from the middle layers used only 50% or fewer of the interactions, showing higher sparsity compared to the other layers. Based on this observation, by introducing a bottleneck that limits each attention head's focus to foster competition, we obtain inductive biases for more efficient patch learning.

4 ASSOCIATIVE TRANSFORMER

This section discusses the essential building blocks of the Associative Transformer (AiT), where patches compete to write into the shared workspace through bottleneck attention. The workspace enables an efficient information writing and reading mechanism by learning a set of priors in explicit memory. These priors are low-rank and learned progressively from the input through end-to-end training. The priors guide the bottleneck attention with an emerging specialization property. Moreover, we extend the learned priors to attractors within the associative memory of a Hopfield network, facilitating information retrieval from memory and efficient association of information fragments.

4.1 GLOBAL WORKSPACE LAYER

We devise an associative memory-augmented attention layer called the global workspace layer, which comprises the squash layer, the bottleneck attention guided by low-rank memory, and the information retrieval within the associative memory of a Hopfield network (Figure 1). The global workspace layer can be seen as an add-on component on the monolithic Vision Transformer, where the feed-forward layers process patches before they enter the workspace, facilitating abstract relation learning, and the self-attention learns the contextual relations for a specific sample. The global workspace layer learns spatial relations across various samples and time steps.

Figure 1: The scheme of the Associative Transformer.
(a) In a global workspace layer, the input $\mathbb{R}^{B \times N \times E}$ is squashed into vectors $\mathbb{R}^{(B \times N) \times E}$. The squashed representations are projected to a low-rank latent space of dimension $D \ll E$ and then are sparsely selected and stored in the explicit memory via a fixed bottleneck $k \ll (B \times N)$. The Hopfield network utilizes the memory to reconstruct the input, where a learnable linear transformation (LT) scales the memory contents to match the input dimension $E$. (b) The Associative Transformer block consists of sequentially connected self-attention, feed-forward layers, and the global workspace layer.

**Squash layer** In self-attention, patches from the same sample are attended to. In our work, we improve the diversity in patch-wise correlation learning beyond one sample using a squash layer. The squash layer obtains patch representations from the entire training batch to enable competition among patches not only from the same sample but also from different samples. This differs from traditional approaches where the competition resides within specific samples. The squash layer concatenates patches within one batch $V \in \mathbb{R}^{B \times N \times E}$ into vectors $V \in \mathbb{R}^{(B \times N) \times E}$, which forms a list of patches regardless of the samples they are from. Though the number of patches changes in practice depending on the batch size, the communication bottleneck with a fixed capacity $k$ limits the number of patches the workspace can attend to at any given time. Since the bottleneck decreases the complexity from $O((B \times N)^2)$ to $O((B \times N) \times k)$, using the squash layer increases the diversity of input patches without adding to the complexity. With the greater diversity, a sample's classification task, for instance, can benefit from other patches belonging to the same class within the batch input.

**Low-rank explicit memory** An explicit memory bank with limited slots aims to learn $M$ priors $\gamma \in \mathbb{R}^{M \times D}$, where $D$ is the dimension of the prior. The priors in the memory bank are used as various keys to compute the bottleneck attentions that extract different sets of patches from the squashed input. Furthermore, using low-rank priors reduces memory consumption, as a lower dimension $D \ll E$ is obtained through a down-scale linear transformation.
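To make the shapes concrete, below is a minimal PyTorch-style sketch of the squash operation together with the down-scale projection into the low-rank latent space. The module and variable names are illustrative rather than taken from the paper's code; only the reshaping from $(B, N, E)$ to $(B \times N, E)$ and the linear projection to dimension $D \ll E$ follow the description above.

```python
import torch
import torch.nn as nn

class SquashAndProject(nn.Module):
    """Squash a batch of patch embeddings and project them to the low-rank space."""

    def __init__(self, embed_dim: int, low_rank_dim: int):
        super().__init__()
        # Down-scale linear transformation E -> D applied before the bottleneck attention.
        self.down = nn.Linear(embed_dim, low_rank_dim)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (B, N, E) patch embeddings from the preceding layers.
        B, N, E = v.shape
        squashed = v.reshape(B * N, E)   # (B*N, E): patches from all samples compete together
        return self.down(squashed)       # (B*N, D): low-rank representation entering the workspace
```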
### 4.2 Bottleneck Attention with a Limited Capacity

The objective of the bottleneck attention is to learn a set of priors that guide attention to various input patches. This is enabled by a cross-attention mechanism constrained by hard attention. We first consider a tailored cross-attention mechanism to update the memory bank based on the squashed input $\Xi^t = V^t \in \mathbb{R}^{(B \times N) \times E}$, then we discuss the case of limiting the capacity via a top-$k$ hard attention. Notably, in the cross-attention, the query is a function of the current memory content $\gamma^t = \{\gamma_i^t\}_{i=1}^M$. The key and value are functions of the squashed input $\Xi^t$. The attention scores for head $i$ can be computed by $A_i^t(\gamma^t, \Xi^t) = \text{softmax}\left(\frac{\gamma^t W_{Q,i} (\Xi^t W_{K,i})^T}{\sqrt{D}}\right)$. This is the case of soft attention with limited constraints on the bottleneck capacity. Moreover, the hard attention allows patches to compete to enter the workspace through a $k$-size bottleneck, fostering the selection of essential patches. In particular, the top-$k$ patches with the highest attention scores from $A_i^t$ are selected to update the memory. To ensure a stable update across different time steps, we employ layer normalization and the Exponentially Weighted Moving Average (EWMA) method as follows:
$$\text{head}_i^t = \text{top-}k(A_i^t)\,\Xi^t W_{V,i}, \quad \hat{\gamma}^t = \text{LN}\big(\text{Concat}(\text{head}_1^t, \ldots, \text{head}_A^t)\, W^O\big), \tag{3}$$
$$\gamma^{t+1} = \alpha \cdot \gamma^t + (1 - \alpha) \cdot \hat{\gamma}^t, \quad \gamma^{t+1} = \frac{\gamma^{t+1}}{\sqrt{\sum_{j=1}^M (\gamma_j^{t+1})^2}}, \tag{4}$$
where top-$k$ selects the $k$ highest attention scores, LN is the layer normalization, and $\alpha$ is a smoothing factor determining the decay rate of older observations. EWMA ensures a stable memory update with varying batch sizes by accumulating both old $\gamma^t$ and new memories $\hat{\gamma}^t$. During test time, the explicit memory is frozen, functioning as fixed priors, and any memory update from the bottleneck attention will not be retained (Figure 8). We only compute $\gamma^{t+1}$ for the following pattern retrieval step in Hopfield networks for the current batch. To ensure a fair evaluation on the test dataset, the same explicit memory from the training time is utilized across all test batches.

**Bottleneck attention balance loss** The bottleneck attention and the low-rank memory contribute to the reduced model complexity of the global workspace layer. Nevertheless, employing multiple components cascaded in depth might lead to difficulty in the emergence of specialized priors in the explicit memory (Figure 9). To overcome this challenge, we propose the bottleneck attention balance loss to encourage the selection of diverse patches from different input positions. The bottleneck attention balance loss $\ell_{\text{bottleneck}}$ comprises two components, i.e., the accumulative attention scores and the chosen instances for each input position. Then, we derive the normalized variances of the two metrics across different positions as follows:
$$\ell_{\text{loads},i,l} = \sum_{j=1}^{M} (A_{i,j,l} > 0), \quad \ell_{\text{importance},i,l} = \sum_{j=1}^{M} A_{i,j,l}, \tag{5}$$
$$\ell_{\text{bottleneck},i} = \frac{\text{Var}(\{\ell_{\text{importance},i,l}\}_{l=1}^{B \times N})}{\left(\frac{1}{B \times N} \sum_{l=1}^{B \times N} \ell_{\text{importance},i,l}\right)^2 + \epsilon} + \frac{\text{Var}(\{\ell_{\text{loads},i,l}\}_{l=1}^{B \times N})}{\left(\frac{1}{B \times N} \sum_{l=1}^{B \times N} \ell_{\text{loads},i,l}\right)^2 + \epsilon}, \tag{6}$$
where $A_{i,j,l}$ denotes the attention score of the input position $l$ for the $j$th memory slot of head $i$, $\ell_{\text{importance}}$ represents the accumulative attention scores for all $M$ memory slots concerning each input position, $\ell_{\text{loads}}$ represents the chosen instances for each input position in $M$ memory slots, $\text{Var}(\cdot)$ denotes the variance, and $\epsilon$ is a small value to avoid division by zero. Finally, the loss scores for all the heads are summed up as follows: $\ell_{\text{bottleneck}} = \sigma \cdot \sum_{i=1}^{A} \ell_{\text{bottleneck},i}$, where $\sigma$ is a coefficient.
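The write path described in this subsection can be summarized in a few lines. The following is a single-head, PyTorch-style sketch (the full model uses multiple bottleneck attention heads); the function and variable names are ours, and the normalization details are simplified relative to Equations (3)–(6).

```python
import torch
import torch.nn.functional as F

def bottleneck_write(memory, x, Wq, Wk, Wv, Wo, k, alpha=0.9, eps=1e-10):
    """One write step of the bottleneck attention (single head, illustrative sketch).

    memory: (M, D)   current priors gamma^t
    x:      (B*N, D) squashed, down-projected patches Xi^t
    Wq, Wk, Wv, Wo: (D, D) projection matrices
    k:      bottleneck capacity (patches each memory slot may attend to)
    """
    D = memory.shape[-1]
    scores = (memory @ Wq) @ (x @ Wk).T / D**0.5            # (M, B*N) cross-attention scores
    attn = F.softmax(scores, dim=-1)

    # Hard attention: keep only the top-k patches per memory slot.
    topk_vals, topk_idx = attn.topk(k, dim=-1)
    hard_attn = torch.zeros_like(attn).scatter_(-1, topk_idx, topk_vals)

    # Memory update with layer normalization and an exponentially weighted moving average.
    new_mem = F.layer_norm((hard_attn @ (x @ Wv)) @ Wo, (D,))
    memory = alpha * memory + (1 - alpha) * new_mem
    memory = memory / memory.norm()                          # normalization as in Eq. (4)

    # Bottleneck attention balance loss over input positions (after top-k selection).
    importance = hard_attn.sum(dim=0)                        # accumulated scores per position
    loads = (hard_attn > 0).float().sum(dim=0)               # selection counts per position
    cv2 = lambda t: t.var() / (t.mean() ** 2 + eps)          # normalized variance
    balance_loss = cv2(importance) + cv2(loads)
    return memory, balance_loss
```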
### 4.3 INFORMATION RETRIEVAL WITHIN ASSOCIATIVE MEMORY

After writing information into the shared workspace, the learned priors can serve as attractors within associative memory. The objective is to reconstruct the current input patches towards more globally meaningful representations based on these attractors.

**Attractors** Priors learned in the memory bank act as attractors in associative memory. Attractors have basins of attraction defined by an energy function. Any input state that enters an attractor's basin of attraction will converge to that attractor. The attractors in associative memory usually have the same dimension as input states; however, the priors $\gamma^{t+1}$ in the memory bank have a lower rank compared to the input. Therefore, we employ a learnable linear transformation $f_{LT}(\cdot)$ to project the priors into a space of the same dimension, $E$, as the input before using them as attractors.

**Retrieval using the energy function in Hopfield networks** Hopfield networks have demonstrated their potential as a promising approach to constructing associative memory. In particular, a continuous Hopfield network (Demircigil et al., 2017; Ramsauer et al., 2021) operates with continuous input and output values. The upscaled priors $f_{LT}(\gamma^{t+1})$ are stored within the continuous Hopfield network and are subsequently retrieved to reconstruct the input state $\Xi^t$. Depending on an inverse temperature variable $\beta$, the reconstructed input $\hat{\Xi}^t$ can be either a metastable state that represents a mixture of various attractors or a fixed state represented by one of the attractors. A large $\beta$ makes it less likely for metastable states to appear, while a small $\beta$ increases the likelihood. The continuous Hopfield network employs an energy function to enable the evolution of patches into more globally meaningful representations with respect to the learned attractors. We update each patch representation $\xi^t \in \Xi^t$ by decreasing its energy $E(\xi^t)$ within associative memory as follows:
$$E(\xi^t) = -\text{lse}\big(\beta, f_{LT}(\gamma^{t+1})\,\xi^t\big) + \frac{1}{2} (\xi^t)^T \xi^t + \beta^{-1} \log M + \frac{1}{2} \zeta^2, \tag{7}$$
$$\zeta = \max_i \|f_{LT}(\gamma^{t+1})_i\|, \quad \hat{\xi}^t = \arg\min_{\xi^t} E(\xi^t), \tag{8}$$
where lse is the log-sum-exp function and $\zeta$ denotes the largest norm of the attractors. Equation 7 describes an iteration that can be applied several times. Usually, we apply just a single step for efficient forward and backward computation during end-to-end training. $t$ is the batch time step, and the iteration time step is implicit. Additionally, a skip connection functioning as the information broadcast from the global workspace is employed to obtain the final output $\Xi^{t+1} = \hat{\Xi}^t + \Xi^t$.
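For intuition, a single step that decreases this energy reduces, for the continuous Hopfield network of Ramsauer et al. (2021), to a softmax-weighted read-out over the stored patterns. The sketch below shows one such update followed by the skip connection; the names are illustrative, and the single-step simplification follows the usage described above.

```python
import torch
import torch.nn.functional as F

def hopfield_broadcast(xi, memory, up_proj, beta=1.0):
    """One-step retrieval from associative memory (illustrative sketch).

    xi:      (B*N, E) current patch representations Xi^t
    memory:  (M, D)   updated priors gamma^{t+1}
    up_proj: nn.Linear(D, E), the learnable transformation f_LT
    beta:    inverse temperature controlling metastable mixing
    """
    attractors = up_proj(memory)                           # (M, E) stored patterns
    # Single update of the continuous Hopfield energy: each patch is replaced by a
    # softmax-weighted combination of attractors (small beta -> metastable mixtures).
    weights = F.softmax(beta * xi @ attractors.T, dim=-1)  # (B*N, M)
    xi_hat = weights @ attractors                          # (B*N, E)
    # Skip connection broadcasting the retrieved content back to the input.
    return xi_hat + xi
```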
5 EXPERIMENTS

In this section, we discuss the settings and extensive empirical results for image classification and relational reasoning tasks. Our study demonstrates that AiT outperforms the coordination method and other sparse attention-based approaches in terms of both performance and model complexity.

5.1 SETUP

**Datasets** We evaluate model performance on two different scales of datasets: (1) small (Triangle (Goyal et al., 2022b), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky & Hinton, 2009)) and (2) middle (Oxford-IIIT Pet (Parkhi et al., 2012) and Sort-of-CLEVR (Santoro et al., 2017)). We train the model on these datasets from scratch using the training split and evaluate using the test split. A detailed description of the datasets can be found in Appendix A.1.

**Model variants** We investigate three different sizes of model configurations, i.e., Small, Medium, and Base. The Base variant setting is adapted from the Vision Transformer (ViT), using 12 layers, 12 attention heads for each layer, a hidden dimension of 768, and an MLP dimension of 3072. The Medium variant with 6 layers and the Small variant with 2 layers are added for efficiency comparisons among approaches. The CLS token is removed, while the pooled representations of the last dense network layer are used instead, since using the CLS token leads to undermined learning results in vision tasks (Wang et al., 2021; Graham et al., 2021).

**Hyperparameters** The hyperparameters were chosen based on a grid search. A batch size of 512 was employed for the CIFAR datasets and the Triangle dataset, 128 for the Pet dataset, and 64 for the Sort-of-CLEVR dataset. We utilized the AdamW optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$, and a weight decay of 0.01. A cosine learning rate scheduler was implemented with an initial learning rate of 1e-5, a warm-up phase of 5 (15) epochs within a total of 100 (300) epochs, and a minimum learning rate set to 1e-6. The smoothing factor of the exponentially weighted moving average, the coefficient $\sigma$, and the small value $\epsilon$ in the bottleneck balance loss were set to 0.9, 1e-2, and 1e-10, respectively. For AiT, we employed a memory slot size of 32 and a bottleneck attention head size of 8. We used a bottleneck size of 512 for CIFAR and Pet, 64 for Triangle, and 256 for Relational Reasoning. We used 32 memory slots for CIFAR, Triangle, and Relational Reasoning, and 128 slots for Pet (Appendix A.3). Unless otherwise noted, we trained the model for 100 epochs and reported the mean of three individual experiments. The code will be made publicly available.

5.2 CLASSIFICATION TASKS

The experiments on image classification tasks include comparisons to a wide range of methods (Table 1). We used the author-recommended hyperparameters to re-implement these methods. Regarding the coordination method, we have examined the efficacy of its variants with different model configurations. The default coordination model consists of 4 layers, with parameter sharing among different attention layers. Coordination-D is a deeper model with 8 layers using parameter sharing. Coordination-H is a high-capacity model with 4 layers that employ individual parameters. Coordination-DH is a high-capacity model with 8 layers.

Table 1: Performance comparison in image classification tasks.

| Methods | CIFAR10 | CIFAR100 | Triangle | Average | Model Size (M) |
|------------------|---------|----------|----------|---------|------------|
| AiT-Base | 85.44 | 60.78 | 99.59 | 81.94 | 91.0 |
| AiT-Medium | 84.59 | 60.58 | 99.57 | 81.58 | 45.9 |
| AiT-Small | 83.34 | 56.30 | 99.47 | 79.70 | 15.8 |
| Coordination | 73.31 | 43.90 | 91.66 | 70.29 | 2.2 |
| Coordination-DH | 72.49 | 51.70 | 81.78 | 68.66 | 16.6 |
| Coordination-D | 74.50 | 40.69 | 86.28 | 67.16 | 2.2 |
| Coordination-H | 78.51 | 48.59 | 72.53 | 66.54 | 8.4 |
| ViT-Base | 83.82 | 57.92 | 99.63 | 80.46 | 85.7 |
| ViT-Small | 79.53 | 53.19 | 99.47 | 77.40 | 14.9 |
| Perceiver | 82.52 | 52.64 | 96.78 | 77.31 | 44.9 |
| Set Transformer | 73.42 | 40.19 | 60.31 | 57.97 | 2.2 |
| BRIMS | 60.10 | 31.75 | - | 45.93 | 4.4 |
| Luna | 47.86 | 23.38 | - | 35.62 | 77.6 |

Figure 4: Model size vs. accuracy for different configurations.

The results show that AiT achieved better performance compared to the coordination methods.
The AiT performance also increased when scaling it from AiT-Small to AiT-Base, while the coordination methods appeared difficult to scale with the increasing number of layers and parameters, as seen in the case of Coordination-DH. Moreover, AiT outperformed the other baseline methods, demonstrating strong performance. For instance, compared to ViT-Base with 85.7M parameters, AiT-Medium is a shallower model with only 45.9M parameters. Nevertheless, AiT-Medium exhibited an average performance of 81.58%, surpassing the ViT-Base model’s average of 80.46% and requiring much fewer parameters. AiT also outperformed sparse attention-based methods such as Perceiver and Set Transformer. We extended the evaluation to a middle-sized dataset of Oxford Pet. We used a patch size of 16. A larger memory of 128 slots was employed due to the higher resolution and the increased data class complexity. For the Oxford Pet dataset, we trained the model for 300 epochs. Figure 3 reveals that ViT performance can be enhanced by including the global workspace layer. AiT-Medium with fewer parameters also outperforms ViT-Base in the Pet dataset. Though AiT-Medium converges at a later training stage, it is a smaller model with fewer layers to compute compared to ViT-Base. Prior specialization Patches in one image can be attended sparsely by different priors. As shown in Section 3, a monolithic Transformer model needs to learn such specialization and relations without the inductive bias introduced by the global workspace layer. Notably, these priors learned to focus on independent spatial areas of an image to guide the attention. We visualized the activation maps for the specialized priors used in CIFAR-10 for AiT-Small (Figure 2). Each slot’s activation maps highlight specific areas during the selection of relevant patches. 5.3 ABLATION STUDY We conducted a comprehensive ablation study to gain insights into the functionalities of the various components of AiT (Table 2). In AiT with reset memory, we initialized the explicit memory every epoch. The W/O Hopfield ablation replaces the Hopfield network with another multi-head attention (MHA) that shares the same architecture as the self attention in Figure 1b. The rationale behind this ablation is grounded in the prior studies of Set Transformer and Perceiver models that relied on two MHA components cascaded in depth. For a fair comparison, instead of simply removing the Hopfield network, we replaced it with the MHA. The added MHA takes the input state $\Xi^t$ as the query, and the upscaled priors $f_{LT}(\gamma^{t+1})$ as the key and value, i.e., $\hat{\Xi}^t = \text{MHA}(\Xi^t, f_{LT}(\gamma^{t+1}))$. Moreover, W/O memory evaluates performance when the global workspace layer is removed, the remaining components of which are equivalent to a simple Vision Transformer. W/O bottleneck shows performance using dense attention by removing the top-$k$ bottleneck capacity constraint. W/O SA examines performance when the multi-head self attention component in Figure 1b is excluded, and W/O FF evaluates performance when the feedforward component is removed. Lastly, the dense networks consist of repeated feedforward components with the other components removed in each AiT block. The analysis suggests that the complete model with all components can achieve the highest classification accuracy. The bottleneck appeared to play a significant role in improving performance, since its absence led to an evident decrease in accuracy. 
Making changes to other components such as Hopfield networks and the explicit memory, while not as impactful, still resulted in degraded accuracy. Despite the relatively good performance of dense networks, their performance in relational reasoning tasks is considerably inferior to that of the AiT model (Section 5.8). We demonstrate the without memory forward ablation in Table 7 and Table 8. The results show that AiT performs as well as or better than the without memory forward ablation. Table 2: Comparison based on an ablation study. The results indicate that combining all the components leads to the highest performance in all the tasks. | Models | CIFAR10 | CIFAR100 | Triangle | Average | |--------------|---------|----------|----------|---------| | AiT | **83.34** | **56.30** | **99.47** | **79.70** | | Reset memory | 81.94 | 55.96 | 99.46 | 79.12 | | W/O Hopfield | 81.03 | 54.96 | 99.44 | 78.48 | | W/O memory (ViT) | 79.53 | 53.19 | 99.47 | 77.40 | | Dense networks | 77.78 | 53.14 | 99.46 | 76.79 | | W/O bottleneck | 75.40 | 46.53 | 93.33 | 73.75 | | W/O SA | 72.72 | 47.75 | 99.46 | 73.31 | | W/O FF | 69.51 | 40.89 | 97.61 | 69.34 | 5.4 COMPARISON WITH THE COORDINATION METHOD We performed a detailed comparison with the coordination method in terms of test accuracy and model size. Figure 4 depicts the results for CIFAR-10 based on models with a single layer. Notably, using the low-rank memory (LM) that has a more diverse set of priors showed benefits in both improving the performance and decreasing the model size. For instance, the baseline coordination (C) method exhibited moderate accuracy of 60.41% with a model size of 2.2M. In contrast, consolidating the low-rank memory and the self-attention (C+LM+SA) exhibited the highest accuracy of 71.62%, while maintaining a relatively compact size of 1.2M. The Hopfield network (HN) maintained the model performance while reducing the model size by replacing the cross-attention with more efficient information retrieval. However, HN was effective only when either the LM or SA component was applied. We assume that retrieval with the Hopfield associative memory relies on a diverse set of priors, which is enabled by the enhanced bottleneck attention using the low-rank memory and the attention balance loss, and the learning through self-attention. By contrast, the previous coordination method had a limited number of priors, e.g., 8, and did not employ self-attention to correlate among input patches. Moreover, integrating all three components (C+LM+HN+SA) resulted in a competitive accuracy of 71.49% with a compact model size of 1.0M. 5.5 MEMORY INITIALIZATION To initialize the explicit memory, we set each slot with values drawn from a specific distribution. We investigated several memory initialization methods (Table 3). The Gaussian distribution generates random values with a mean of 0 and a variance of 1. The sinusoidal positional embedding [Vaswani et al., 2017] uses sine and cosine functions to represent positions in a sequence. The uniform distribution [Graves et al., 2014] uses an upper bound $\frac{1}{\sqrt{M \cdot D}}$, where $M$ is the memory slot number and $D$ is the slot size. The identity distribution [Goyal et al., 2022a] uses ones on the diagonal and zeros elsewhere. We found that the Gaussian distribution resulted in the best performance, possibly by preventing specific priors from dominating the learning process in early training stages. 
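As a reference, the four initialization schemes compared above can be written down in a few lines. This is an illustrative sketch with names of our choosing; the sinusoidal case reuses the standard fixed positional-encoding construction (assuming an even slot size $D$), and the sign convention for the uniform bound is an assumption.

```python
import math
import torch

def init_memory(M: int, D: int, scheme: str = "gaussian") -> torch.Tensor:
    """Initialize an (M, D) explicit memory with one of the compared schemes."""
    if scheme == "gaussian":              # N(0, 1); best-performing in our comparison
        return torch.randn(M, D)
    if scheme == "uniform":               # bound 1 / sqrt(M * D); symmetric range assumed
        bound = 1.0 / math.sqrt(M * D)
        return torch.empty(M, D).uniform_(-bound, bound)
    if scheme == "identity":              # ones on the diagonal, zeros elsewhere
        return torch.eye(M, D)
    if scheme == "sinusoidal":            # fixed sin/cos positional embeddings over slots
        pos = torch.arange(M, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, D, 2, dtype=torch.float) * (-math.log(10000.0) / D))
        mem = torch.zeros(M, D)
        mem[:, 0::2] = torch.sin(pos * div)
        mem[:, 1::2] = torch.cos(pos * div)
        return mem
    raise ValueError(f"unknown scheme: {scheme}")
```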
5.6 EFFICACY OF BOTTLENECK ATTENTION BALANCE LOSS

The Bottleneck Attention Balance Loss facilitates the selection of diverse input patches for each prior. To quantitatively measure the efficacy, we computed sparsity scores that represent the ratio of distinct patches among all selected patches. In Figure 10, we observe an apparent increase in the patch diversity.

5.7 VARYING THE INVERSE TEMPERATURE IN HOPFIELD NETWORKS

We investigated the effect of the inverse temperature on information retrieval based on the Hopfield networks in Figure 5, which shows the reconstructed patches in the CIFAR-10 task for the AiT-Small model. We found that using an inverse temperature of 1.0 gave the best retrieval performance based on the Hopfield networks. The results suggest that the $\beta$ parameter requires tuning to reach optimal performance. We aim to study a mechanism to adjust $\beta$ adaptively in the future, addressing this sensitivity and potentially further improving performance.

Figure 5: Comparison with varying inverse temperature scores. The inverse temperature $\beta$ influences the formation of metastable states that concurrently represent multiple patch representations. A smaller $\beta$ is more likely to generate such metastable states, while a larger $\beta$ leads to a stronger separation of different patterns. However, a larger $\beta$ can also lead to local minima, where input patterns are reconstructed to the same pattern within associative memory.

5.8 RELATIONAL REASONING

In relational reasoning tasks, we aim to train a model to answer questions concerning the properties and relations of various objects based on a given image. A performant model can attend to specific regions of images for the question-answering task. We employed the Sort-of-CLEVR dataset (Santoro et al., 2017) and compared performance to both Transformer-based models, including the Set Transformer and the coordination method, and non-Transformer-based models, including CNN+MLP and CNN+Relation Networks (CNN+RN) (Santoro et al., 2017). The non-Transformer-based models incorporate inductive biases into their architectures, such as convolutional layers focusing on different image areas. This often results in superior performance compared to the Transformer-based methods that lack a built-in inductive bias. Moreover, two dense networks, Dense-Small and Dense-Base, are included as additional non-Transformer-based models. The Dense-Small (11.1M) and Dense-Base (62.7M) are derived from AiT-Small and AiT-Base, respectively. Additionally, in relational reasoning tasks, a question was embedded with an embedding layer that consists of a learnable linear projection and layer normalization before and after the linear projection. The question embedding was then concatenated to the image patch embeddings as the input of the model, and the labels were a list of answer options with 10 classes. Table 4 presents the results for relational and non-relational tasks. In the non-relational task, the question pertains to the attributes of a specific object, whereas in the relational task, the question focuses on the relations between different objects. A description of the dataset can be found in Appendix A.1. The results demonstrate a substantial improvement in AiT's performance when addressing the relational reasoning tasks. This indicates that the global workspace layer can learn spatial relations across different samples and time steps, contributing to task performance.
Dense networks generally do not perform well in the more complex relational reasoning tasks.

6 CONCLUSIONS

We proposed the Associative Transformer (AiT), an architecture inspired by Global Workspace Theory and associative memory. AiT leverages a diverse set of priors with the emerging specialization property to enable enhanced association among representations via the Hopfield network. The comprehensive experiments demonstrate AiT's efficacy compared to conventional models, including the coordination method. In the future, we aim to investigate multi-modal competition within the shared workspace, enabling tasks to benefit from the cross-modal learning of distinct perceptual inputs.

REFERENCES

E. Awh, E.K. Vogel, and S.-H. Oh. Interactions between attention and working memory. *Neuroscience*, 2006.

Bernard J. Baars. *A Cognitive Theory of Consciousness*. Cambridge University Press, 1988.

Jonathan Baxter. A model of inductive bias learning. In *J. Artif. Intell. Res.*, 2000.

Rodney A. Brooks. Intelligence without representation. *Artif. Intell.*, 47(1-3):139–159, 1991.

Stanislas Dehaene, Michel Kerszberg, and Jean-Pierre Changeux. A neuronal model of a global workspace in effortful cognitive tasks. In *Proceedings of the National Academy of Sciences*, 1998.

M. Demircigil, J. Heusel, M. Löwe, S. Upgang, and F. Vermet. On a model of associative memory with huge storage capacity. *Journal of Statistical Physics*, 168(2):288–299, 2017.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, and et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021.

Adam Gazzaley and Anna C. Nobre. Top-down modulation: bridging selective attention and working memory. *Trends in Cognitive Sciences*, 2011.

Anirudh Goyal and Yoshua Bengio. Inductive biases for deep learning of higher-level cognition. *arXiv:2011.15091*, 2020a.

Anirudh Goyal and Yoshua Bengio. Inductive biases for deep learning of higher-level cognition. *arXiv preprint:2011.15091*, 2020b.

Anirudh Goyal, Aniket Rajiv Didolkar, Alex Lamb, and et al. Coordination among neural modules through a shared global workspace. In *ICLR*, 2022a.

Anirudh Goyal, Aniket Rajiv Didolkar, Alex Lamb, and et al. Coordination among neural modules through a shared global workspace. In *ICLR*, 2022b.

Benjamin Graham, Alaaeldin El-Nouby, Hugo Touvron, and et al. Levit: a vision transformer in convnet's clothing for faster inference. In *ICCV*, 2021.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. *arXiv:1410.5401*, 2014.

Klaus Greff, Sjoerd van Steenkiste, and Jürgen Schmidhuber. On the binding problem in artificial neural networks. *arXiv:2012.05208*, 2020.

Çaglar Gülçehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural turing machine with continuous and discrete addressing schemes. *Neural Comput.*, 30(4), 2018.

Ankit Gupta and Jonathan Berant. GMAT: global memory augmentation for transformers. *arXiv:2006.03274*, 2020.

Benjamin Hoover, Yuchen Liang, Bao Pham, and et al. Energy transformer. *arXiv:2302.07253*, 2023.

John J. Hopfield. Hopfield network. *Scholarpedia*, 2(5):1977, 2007.

Andrew Jaegle, Felix Gimeno, Andy Brock, and et al. Perceiver: General perception with iterative attention. In *ICML*, 2021.

Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, and et al. Perceiver IO: A general architecture for structured inputs & outputs. In *ICLR*, 2022.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, and et al. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In *CVPR*, 2017.
Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, and Ryota Kanai. On the link between conscious function and general intelligence in humans and machines. *Transactions on Machine Learning Research*, 2022.
NqpdT8DwGc
In the abstract, the paper claims that the adversarial examples used to fingerprint a model can be used for a model extraction attack. However, if we only obtain information about which pre-trained model was used for fine-tuning, this is not a model extraction attack in my understanding.
Stealing the Invisible: Unveiling Pre-Trained CNN Models through Adversarial Examples and Timing Side-Channels Anonymous authors Paper under double-blind review Abstract Machine learning, with its myriad applications, has become an integral component of numerous technological systems. A common practice in this domain is the use of transfer learning, where a pre-trained model’s architecture, readily available to the public, is fine-tuned to suit specific tasks. As Machine Learning as a Service (MLaaS) platforms increasingly use pre-trained models in their backends, it’s crucial to safeguard these architectures and understand their vulnerabilities. In this work, we present an approach based on the observation that the classification patterns of adversarial images can be used as a means to steal the models. Furthermore, the adversarial image classifications in conjunction with timing side channels can lead to a model stealing method. Our approach, designed for typical user-level access in remote MLaaS environments exploits varying misclassifications of adversarial images across different models to fingerprint several renowned Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. We utilize the profiling of remote model inference times to reduce the necessary adversarial images, subsequently decreasing the number of queries required. We have presented our results over 27 pre-trained models of different CNN and ViT architectures using CIFAR-10 dataset and demonstrate a high accuracy of 88.8% while keeping the query budget under 20. 1 Introduction The rapid growth of Machine Learning (ML) has transformed various industries. However, the complexity and resource intensity of developing in-house models have paved the way for Machine Learning as a Service (MLaaS) (Ribeiro et al., 2015). Companies like Google and Amazon provide businesses access to advanced, pre-trained ML/DL models via cloud services, eliminating the overhead of internal development and maintenance. However, the widespread use of MLaaS has amplified concerns around model privacy and security. These models, loaded with proprietary data and unique algorithms, are vital intellectual properties that offer competitive edge. In such a highly competitive environment, the security of these models is at risk due to the rise of techniques for reverse-engineering or “stealing” (Oliynyk et al., 2022). Increased research in model stealing poses a significant threat to the proprietary rights and market position of MLaaS providers. A query-based attack is a common method for model stealing, where adversaries use a model’s prediction Application Programming Interface (API) to recreate or “steal” (Oliynyk et al., 2022) it without direct access to its parameters or training data. Attackers generate a set of synthetic input samples or may already have access to data of similar distribution as that of the training data and send these to the model’s prediction API. Further, by analyzing the predictions, they attempt to reverse-engineer the model, often referred to as a black-box attack because the attacker has no knowledge of the model’s internal workings, but can only access its input/output interface. Numerous studies have proposed attacks on diverse ML and DL models across various modalities, including text (Krishna et al., 2020; Li et al., 2023; Pal et al., 2020), images, and graphs (Wu et al., 2022; He et al., 2021; DeFazio & Ramesh, 2019; Shen et al., 2022). 
In particular for attacks on image modality which is the focus of this paper, there are certain works which try to steal the target model’s complete architecture and parameters (Kariyappa et al., 2021; Rolnick & Kording, 2020; Roberts et al., 2019). On the other hand there are many works which create a substitute model by replicating the performance of the original target model (da Silva et al., 2018; Kariyappa et al., 2021; Li et al., 2018; Mosafi et al., 2019; Orekondy et al., 2019; Papernot et al., 2017; Yuan et al., 2022). Notably, the success and practicality of query-based attacks hinges on the query budget, which limits the number of queries one can make to an ML model in a set period to manage resources and enhance security. Hence for an attacker, reducing query numbers is vital, as excessive queries can raise alarms, leading to service suspension and a thwarted attack. In addition to query-based attacks, there exists another category of model stealing attacks that leverage side-channel or microarchitectural leakages to extract details about the model’s architecture and parameters. A substantial amount of research has focused on using side-channel information in remote settings to reverse-engineer the architecture and parameters of proprietary Deep Neural Networks (DNNs). Various studies have leveraged cache-based side-channels to recreate essential architectural secrets of the target DNN during the inference phase, using the Generalized Matrix Multiply (GEMM) operation in the DNN’s implementation (Hong et al., 2018; Yan et al., 2020). Cache memory access patterns have also been exploited to gain layer sequence information of CNN models and thereafter the complete architecture by utilizing LSTM-CTC model and GANs as well (Hu et al., 2020; Liu & Srivastava, 2020). Other investigations have exploited shared GPU resources, hardware performance counters, and GPU context-switch side-channels to extract the internal DNN architecture of the target (Naghibijouybari et al., 2018; Wei et al., 2020). Additionally, some researchers have shown that it is possible to steal critical parameters of the target DNN by exploiting rowhammer fault injections on DRAM modules (Rakin et al., 2021). Timing side-channels have been utilized to both build an optimal substitute architecture for the victim and extract DL models on high-performance edge deep learning processing units (Duddu et al., 2018; Batina et al., 2019; Won et al., 2021). Few works have also leveraged side-channel information like power (Wei et al., 2018; Yoshida et al., 2020), electromagnetic emanation (Batina et al., 2019; Yu et al., 2020; Chmielowski & Weissbart, 2021), and off-chip memory access (Hua et al., 2018) to reverse engineer architectural secrets of DNNs, which require physical access to the model. Companies that offer APIs for specialized applications, often use models fine-tuned from standard pre-trained models like AlexNet or ResNet that are available publicly. Some research works aim to identify these underlying pre-trained/teacher models in the backend of MLaaS platforms. One such work has proposed a query based model stealing attack that relies on analyzing the classification outputs of customized synthetic input images introduced to the model with a minimum requirement of at least 100 queries for good attack accuracy (Chen et al., 2022). 
From a side-channel perspective, a recent study employed GPU side-channels to identify pre-trained model architectures, but this approach used nvprof GPU profiler, which is mostly disabled on cloud platforms providing the MLaaS services (Weiss et al., 2023). In other work, user accessible CPU and GPU memory side-channel information were exploited to perform DNN fingerprinting on CPU-GPU based edge devices, but these won’t work in cloud settings where the client only gets an API response from the server and cannot access any other information about the system (Patwari et al., 2022). Our research is the first to combine both query based as well side-channel model stealing attack methodologies to determine the pre-trained models deployed in the backend of an MLaaS platform while only possessing client or user privileges and limiting the query requirements to less than 20 queries. With high influx of works on model stealing attacks, defending machine learning models against theft has also become of paramount importance. Various techniques, such as rate limiting and incorporating noise into the output predictions, have been devised to prevent these attacks. However, these strategies have their limitations and can impact the service utility for legitimate users. An emerging technique in the field of model IP protection is watermarking or fingerprinting models (Regazzoni et al., 2021; Lederer et al., 2023). Recently many works have also utilized adversarial examples for the same (Xue et al., 2021; Szyller et al., 2021; Zhao et al., 2020). This method works by embedding unique perturbations into the model during its training phase, which can be used as identifiers. These adversarial examples - inputs that are intentionally designed to induce model errors - act as the model’s unique fingerprints and can be used as markers of authenticity. However, the fact that adversarial examples can be used to fingerprint the ML models also poses the looming danger of being used as a means of model stealing. Adversarial example show an intriguing property of transferability (Liu et al., 2017), observed in machine learning models, particularly deep neural networks (DNNs). This property means that an adversarial example, originally designed for a specific machine learning model, can also affect other models, leading to successful misclassifications. In this work, we demonstrate and emphasize on the fact that, although adversarial examples can transfer between models, they may not necessarily be classified into the same class as the initial model due to difference in the decision boundaries of various models. We exploited these divergent misclassifications of adversarial images from different models to fingerprint several renowned pre-trained CNN architectures. We work with assumption that the adversary does not have any knowledge about target model’s architecture as well as weight parameters. For this, we utilize a window of top classifications from the MLaaS server. For each architecture we profile with multiple models of varying weight parameters to better classify the target model. It is to be noted, our work is the first one to exploit adversarial image classification pattern among various CNN architectures to reverse-engineer CNN models. Our work shows that an effective combination of adversarial image selection and timing based side-channels can be used to discern the target CNN models, with as little as 15 observations, thus reducing the query budget requirement for the attack. 
This strategy significantly helps in reducing our query requirements, allowing us to keep them below ten queries for a successful attack. We have shown results for our attack with the standard CIFAR-10 (Krizhevsky et al., 2009) dataset. Furthermore, we have worked with 27 pre-trained models of different CNN and ViT architectures provided by PyTorch (Paszke et al., 2019) and HuggingFace (Wolf et al., 2019), respectively. Next, we summarize the contributions of this work:
• We observe that while transfer learning of CNN architectures still allows adversarial attacks to succeed, the target classes of the resulting misclassifications are distinct across architectures and can be used for fingerprinting.
• We present a 2-staged model stealing attack, exploiting remote inference timing side-channels in the first stage to shortlist potential architectures and the prediction patterns of adversarial images in the second stage for the final prediction.
• We show through extensive experiments on 27 pre-trained models available on PyTorch and HuggingFace, using the CIFAR-10 dataset, that our model stealing attack works accurately even in situations where the weights of the target model vary significantly compared to the state-of-the-art.

2 RECOGNIZING ADVERSARIAL IMAGES AS ARCHITECTURE IDENTIFIERS

We are aware of the transferability property inherent in adversarial examples, whereby an adversarial image generated to induce misclassification in a particular Machine Learning (ML) model may also succeed in causing misclassification when presented as input to other ML models. We emphasize that while transferability implies that the misclassification extends across models, the resulting misclassified class is not the same. In this section, we delve into these misclassification patterns observed among various pre-trained, or teacher, models, evaluate their ability to fingerprint these models, and subsequently exploit this to orchestrate a model extraction attack on Machine Learning as a Service (MLaaS) servers. A comprehensive exploration of the application of these adversarial examples is discussed in Section 4.

Experimental Setup: We perform our experiments using a total of 27 pre-trained image classification models, including both CNN and ViT architectures. We show the list of models in Table 1, where we group the models under different architecture types. While the experiments were performed using PyTorch, it should be noted that they are not limited to this particular Deep Learning (DL) framework and can be replicated using alternative frameworks such as TensorFlow and Caffe. For the adversarial example generation we use three well-known algorithms, namely the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Basic Iterative Method (BIM). We use the CIFAR-10 dataset for model training and adversarial example generation. The models are fine-tuned on this dataset starting from the pre-trained models provided by PyTorch, which are originally trained on the ImageNet-1k dataset. Consider a scenario where we have a set of models $M = \{M_1, M_2, \ldots, M_Z\}$. From this set, we select a single model, say $M_i$, $1 \leq i \leq Z$, as the base model to generate $N$ adversarial examples employing any of the recognized adversarial attack strategies. We then give these $N$ adversarial images as input to all the remaining models and observe the classification pattern for each image. The primary objective is to determine whether these classifications are uniform across all models or whether they show variation.
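To make this setup concrete, the following minimal sketch shows how the misclassification patterns can be collected. It assumes `base_model` is the fine-tuned base network (for example ResNet-18), `models` is the list of the remaining fine-tuned classifiers, and `loader` yields CIFAR-10 batches scaled to [0, 1]; these names are illustrative, not the paper's exact code. FGSM is shown for brevity; PGD and BIM follow the same pattern with iterative steps.

```python
import torch
import torch.nn.functional as F

def fgsm(base_model, x, y, eps=8 / 255):
    """One-step FGSM on the chosen base model M_i (images assumed in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(base_model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def classification_pattern(models, x_adv):
    """Top-1 label that every candidate model assigns to each adversarial image.

    Returns a (num_models, num_images) tensor; the observation exploited here is
    that these rows differ across architectures."""
    return torch.stack([m(x_adv).argmax(dim=1) for m in models])

# Illustrative usage (all models are assumed to be in eval() mode):
# for x, y in loader:
#     x_adv = fgsm(base_model, x, y)
#     pattern = classification_pattern(models, x_adv)
```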
For our initial experiment, we have $Z = 27$, $M_i$ is Resnet-18 model and we generate $N = 1000$ adversarial images of CIFAR-10 dataset using FGSM, BIM and PGD adversarial attacks. In Figure I, we show classification for a sample of 5 adversarial images. Table 1: List of Models and their Groups | Group Name | Models | |------------|--------| | AlexNet | Krizhevsky et al. [2012] AlexNet | | VGG | Simonyan & Zisserman [2015] VGG-11, VGG-13, VGG-16, VGG-19 | | ResNet | He et al. [2016] Resnet-18, Resnet-34, Resnet-50, Resnet-101, Resnet-152 | | Squeezenet | Iandola et al. [2016] Squeezenet1.0, Squeezenet1.1 | | Densenet | Huang et al. [2017] Densenet-121, Densenet-161, Densenet-169, Densenet-201 | | Inception | Szegedy et al. [2016] Inception v3 | | GoogleNet | Szegedy et al. [2015] GoogleNet | | ShuffleNet | Ma et al. [2018] ShuffleNet V2 | | MobileNet | Sandler et al. [2018] MobileNet V2 | | ResNeXt | Xie et al. [2017] ResNeXt-50-32x4d, ResNeXt-101-32x8d | | Wide ResNet| Zagoruyko & Komodakis [2016] Wide ResNet-50-2, Wide ResNet-101-2 | | MNASNet | Tan et al. [2019] MNASNet 1.0 | | Google ViT | Wu et al. [2020] google/vit-base-patch16-224-in21k | | Microsoft Swin | Liu et al. [2021] microsoft/swin-base-patch4-window7-224 | Figure 1: Varying classification for 5 adversarial images generated using FGSM, PGD, and BIM attacks, belonging to 5 different classes of CIFAR-10 dataset for 27 pre-trained models (out of total 1000 images) from 5 different classes of CIFAR-10 dataset on 27 pre-trained CNN and ViT models. The classification for adversarial images generated using FGSM, PGD and BIM are shown in Figure 1a, Figure 1b, and Figure 1c respectively. For each adversarial image, we observe that the misclassified label by each model is not same and it varies for all the models. This trend is consistent for not only the five images depicted in Figure 1 but also for the entire set of 1000 adversarial images. Based on our observation, in Section 4 we demonstrate how we can leverage the unique misclassification trend among different pre-trained models to fingerprint them and then execute a successful model extraction attack. However, prior to that, we furnish the threat model in the following section. Figure 2: Classification of adversarial images generated using PGD with models of the same architectures but different weight parameters. 3 THREAT MODEL In this section we define the threat model for the proposed model extraction attack. We consider an MLaaS scenario, where a ML service provider provides API access to one of the trained ML model which has been deployed on the cloud server, to all the authorized clients. The adversary is also an authorized client of the service. The clients have no knowledge about the ML model’s architecture running on the server. The adversary does not have knowledge about the target model’s architecture, and further it does not have information about the weights as well. **Adversary’s capabilities:** Unlike other works (Weiss et al., 2023; Patwari et al., 2022) the adversary only has API access to the MLaaS model, through which it’s impossible to use any CPU and GPU profiling tools on the server. Additionally, the adversary has only client-level privileges, and can get the execution time of each image’s inference query to the model. The adversary has access to publicly available pre-trained models which he can fine-tune for particular datasets. The adversary has access to the dataset belonging to the same distribution as target model’s training data. 
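Under this threat model, everything the adversary observes comes from a single timed API call. A minimal client-side sketch is given below; the endpoint URL and the JSON response field are hypothetical placeholders, since the paper only assumes that the service returns the top-5 classification and that the client can time the round trip.

```python
import time
import requests

API_URL = "https://mlaas.example.invalid/predict"  # hypothetical endpoint

def timed_query(image_bytes):
    """Send one inference query; return the top-5 output and the observed latency (ns)."""
    t0 = time.perf_counter_ns()
    resp = requests.post(API_URL, files={"image": image_bytes}, timeout=30)
    elapsed = time.perf_counter_ns() - t0
    resp.raise_for_status()
    return resp.json()["top5"], elapsed  # "top5" is a hypothetical field: [(label, prob), ...]
```

In practice the measured round trip also contains network jitter, which is one reason the profiling step later aggregates many timing samples per model rather than relying on a single measurement.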
**Adversary’s Objective:** The primary objective of the adversary is to discern the pre-trained model utilized in training the target model that operates on the MLaaS server. The adversary seeks to extract the model by analyzing the classifications of the input image and its corresponding inference time, and ensuring minimum possible queries to the MLaaS. 4 MODEL FINGERPRINTING USING ADVERSARIAL EXAMPLES AND TIMING SIDE-CHANNEL In this section, we extend the discussion from Section 2, where we highlighted the non-uniform classifications by pre-trained models on adversarial images. We illustrate how to identify a minimal subset of adversarial images that can effectively profile all the pre-trained models. Additionally, we demonstrate that by leveraging timing side-channels, we can further reduce the size of this minimal adversarial set. This is achieved by focusing on models whose inference time aligns closely with that of the target model operating on MLaaS server. 4.1 MODEL PROFILING WITH ADVERSARIAL IMAGES In Section 2, we showed varying classification patterns for different model architectures, but model for each architecture had a specific set of weights which in realistic scenario won’t be same even though the architecture remains same. This is because various possible initial parameters which are set before training may vary for different models. Thus, we work under the assumption of a weight-oblivious adversary, implying that we are unaware of the model architecture and its weights. For such an adversary, it is essential to create profiles for numerous models with the same architecture but with differing weight parameters. We have total of $Z$ different model architectures. We train $k$ models of each architecture with varying weights. We generate a set of $N$ adversarial images and generate classification for all $k$ models for each architecture. For our experiment, without loss of generality, we took $Z = 27$, $k = 10$ and $N = 1000$. In Figure 2, we show classification for 5 (out of 1000) adversarial images of CIFAR-10 dataset with 3 models, namely Alexnet, Resnet-18 and VGG11. It is visible from the figure that the classification for each image is not consistent across all models of the same architecture. Furthermore, where classifications appear consistent—for example, for the class 0 image—the results are comparable across all three architectures, making it difficult to distinguish between different architectures using such images. Consequently, we chose to evaluate the top-5 classifications of the model, rather than just the top-1. This approach provided us with deeper insights into the varied classification patterns of the different models. We show this by again taking the example of three architectures, Alexnet, Resnet-18, and VGG11. We trained 10 models of each architecture with different weight parameters. Next, for a particular adversarial image generated using a CIFAR-10 image with PGD, we collected top-5 classifications for all the 30 models. We then calculated class-wise probability means for each architecture. As a result, for every architecture, we had 10 mean values corresponding to the 10 CIFAR-10 classes. The final step is to calculate class-wise difference of means (DoMs) for each architecture pair, for --- 1 We choose top-5 classifications as it is a common practice in MLaaS environments. 
Figure 3: Comparison of Class-wise Difference of Means (DoMs) of classification probabilities between (a) Alexnet and Resnet18, (b) Alexnet and VGG11, and (c) Resnet18 and VGG11 with intra-architecture DoMs which we have provided three plots in Figure 3. The blue line plots are for inter-architecture DoMs. Subsequently, we also show plots for intra-architecture DoMs, for which we compare first 5 models of a given architecture with the other 5 models of the same architecture. From Figure 3, it is evident that this specific adversarial image serves well in distinguishing between Alexnet and Resnet-18 models, as well as Resnet-18 and VGG11 models. This is observable through the high DoMs for classes 0 and 3 in both the Alexnet/Resnet-18 pair and the Resnet-18/VGG11 pair. However, for the Alexnet and VGG11 pair, there isn’t a significant difference in DoMs when comparing inter and intra-architecture models. We now formulate our methodology to discern the target model’s architecture by utilizing top-5 classification information for some adversarial images. We have total of $Z$ architectures and $k$ models for all architectures with different weight parameters which were trained earlier. Let $I_x$ be an image from the adversarial image set $\{I_1, I_2, \ldots, I_N\}$. We first get classification for all $Z \times k$ models for the image $I_x$. For each model we get a vector of 5 probabilities for top-5 class labels. Next, we transform these vectors into vectors of size $|L|$, where $L$ is the set of class-labels for any chosen dataset. In each of this vector, we place the probabilities of top-5 classes at their index values, and all other values are set to zero. With these steps our template data for all models for a particular adversarial image $I_x$ is ready. Now, for an unknown target model we pass the image $I_x$ and get the top-5 classification. We convert the result to a vector of size $|L|$ as before. The next step is to discern the architecture of the target model, by comparing the target model’s generated classification vector with the prior created template. This prediction will be based on template created for one adversarial image. We select $W$ adversarial images and then create template for them. We perform majority voting on the results from template of each image. The next question that arises is how we do we decide on which images to choose for template creation and we delve into it in the next subsection. 4.1.1 Adversarial Image set selection for Template Creation So far, we have explored how to identify the target model by utilizing its classification of an adversarial image. The next question we address is how to determine the most effective adversarial image from a given set for this specific task. Furthermore, we need to decide the number of such images to be selected to ensure the best results with majority voting, while also optimizing this quantity to maintain the query budget. We have a set of $Z$ potential target architectures. For each of these $Z$ architectures $k$ models have been trained with different weight parameters. Furthermore, we have top-5 classification vectors transformed into vectors of size $|L|$ for these models on a set of $N$ adversarial images. Our objective is to identify the top $d$ images that shows the highest distinguishing ability among the $Z$ architectures. To achieve this, we first compute the element-wise mean of the classification vectors from the $k$ models for each adversarial image across all architectures. 
This results in $Z$ vectors of size $|L|$ for each adversarial image. We then calculate the Euclidean distance between each pair of architecture vectors for each adversarial image and sum these distances across all pairs. The adversarial images are ranked in descending order based on this sum. Finally, we select the top $d$ images, those that have the highest summed Euclidean distances, as these are the images that are most effective in distinguishing the different CNN architectures. We have elaborated our methodology in a systematic manner in Algorithm 1.

Algorithm 1 Adversarial Image Selection for Model Profiling in the Black-box Setup
1: \( N \leftarrow \) number of adversarial images
2: \( Z \leftarrow \) number of architectures
3: \( k \leftarrow \) number of models per architecture
4: \( L \leftarrow \) size of classification vector
5: \( d \leftarrow \) number of images to select
6: \( ED() \): Euclidean Distance calculation
7: Initialize \( V_{i,j,p} \)
8: for \( i = 1 \) to \( N \) do
9: for \( j = 1 \) to \( Z \) do
10: for \( p = 1 \) to \( k \) do
11: \( V_{i,j,p} \leftarrow \text{Classify}(i, A_j, p) \)
12: end for
13: end for
14: end for
15: Initialize \( D_i \)
16: for \( i = 1 \) to \( N \) do
17: for \( j = 1 \) to \( Z \) do
18: \( V_{i,j} \leftarrow \frac{1}{k} \sum_{p=1}^{k} V_{i,j,p} \)
19: end for
20: \( D_i \leftarrow \sum_{j=1}^{Z-1} \sum_{m=j+1}^{Z} ED(V_{i,j}, V_{i,m}) \)
21: end for
22: Sort \( D_i \) in descending order
23: return top \( d \) indices from \( D_i \)

We have established a method to select the adversarial images which can be used to distinguish between various architectures, but the number of images we require will depend on the total number of architectures $Z$. This is because, to distinguish between a larger number of architectures, the number of distinguishing images will also increase, which in turn means a higher query budget requirement. This will also hamper the architecture predictability of our proposed approach. To address this, we came up with a methodology which uses model inference times to first shortlist the potential target models, making $Z$ smaller, and then applies our adversarial image selection algorithm. We discuss this in detail in the next subsection.

4.2 TIMING PROFILES

The architectures of vision models vary in terms of the number of layers, layer types, and other parameters. Primarily, the inference time of any CNN model depends on the network's depth. This is equally applicable to publicly available pre-trained CNN and ViT models. In Figure 4, we display the box-plots representing the timing distributions of 27 pre-trained models, including 25 CNN models from PyTorch and 2 transformer models from HuggingFace, fine-tuned on the CIFAR-10 dataset. We use the `perf_counter_ns` function from the `time` Python package to collect these timings. Each timing distribution is obtained by measuring the inference time of 100 images of differing classes. Each image is processed through the model 100 times, resulting in a total of 10,000 timing values for each distribution. The plot clearly shows that the inference times for all 27 models vary significantly. We notice that some models have timing ranges that partially intersect with those of others. Thus, inference time alone cannot serve as a distinctive factor between the pre-trained models. However, we can use it as a criterion to filter the potential target models whose timing aligns closely with the target model's timing.
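For the attack itself, the timing profile of each architecture reduces to a (MIN, MAX) range of observed inference times. A small sketch of how such profiles can be built is shown below, assuming local models for illustration; the same aggregation applies when the samples come from remote API round trips, and the variable names are illustrative.

```python
import time
import torch

def local_inference_time(model, x):
    """One timing sample with time.perf_counter_ns around a single forward pass."""
    t0 = time.perf_counter_ns()
    with torch.no_grad():
        model(x)
    return time.perf_counter_ns() - t0

def build_timing_profiles(models_by_arch, images, repeats=100):
    """Collect inference-time samples per architecture and keep the (MIN, MAX) range.

    models_by_arch: dict {arch_name: model}; images: iterable of input tensors."""
    profiles = {}
    for arch, model in models_by_arch.items():
        samples = [local_inference_time(model, x)
                   for x in images for _ in range(repeats)]
        profiles[arch] = (min(samples), max(samples))
    return profiles
```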
Subsequently, we can utilize Algorithm 1 from Section 4.1.1 to identify the minimum set of adversarial images, which can help recognize the target model among the shortlisted ones based on inference time. It is crucial to note that by trimming down the models to create a smaller pool of possible target models, which further helps in reducing the number of adversarial images required to distinguish between shortlisted architectures. This transition further aids in lowering the query budget for the model extraction attack. In Figure 4, we observe the maximum intersection in box-plots for 6 models namely, Resnet34, Resnet50, VGG11, VGG13, Inception-V3 and Resnext101-32-4d is still less than the total number of class labels in CIFAR-10 dataset. 4.2.1 SHORTLISTING MODELS GIVEN UNKNOWN TARGET MODEL’S INFERENCE TIME To begin with, we will gather timing traces for all the available pre-trained models and store their maximum and minimum values range as the timing profile for each model. For every model, we will collect the inference time for different class images from the dataset, then jointly calculate their maximum and minimum range, providing a complete timing range for all image types. Let us denote the number of models as $Z$. For any model $i$, the min-max timing range is defined as $(\text{MIN}_i, \text{MAX}_i)$. We now outline the procedure for narrowing down potential target models, given the inference time of the original unknown model. Consider an unknown target model, denoted as $X$. The inference time for this model is represented as $T_X$. We use $T_X$ as a basis to select models whose min-max range encompasses $T_X$. This procedure is formally outlined in Algorithm 2 (Appendix A). Once we have the set of potential target models, we can employ Algorithm 1 to get the minimum set of adversarial examples for the particular set of models. Then we pass the final selected images to the target model and discern its architecture based on the model’s classification outputs. The final prediction of the target model is determined through a process of majority voting. In this step, adversarial images chosen for a pre-selected group of architectures are inputted into the target model, which then yields the top-$n$ class predictions. For every prediction made from an adversarial image, we measure the Euclidean distance to all profiled model architectures and pinpoint the one with the closest proximity. In the final phase, a comprehensive majority voting is conducted, taking into account all the models predicted by the selected adversarial images. The outcome of this collective voting determines the final prediction. The final methodology is shown in Figure 5a. 5 EXPERIMENTAL RESULTS AND DISCUSSION In this section, we discuss the results for the model extraction methodologies explained in the previous section. We have performed our experiments using 27 pre-trained models including 25 CNN models provided by PyTorch via the torchvision.models package and 2 Vision Transformer models from HuggingFace. We further fine-tuned these models for CIFAR-10 dataset. All experiments throughout the paper have been performed on an Intel Xeon Silver 4214R CPU system with 128 GB RAM. We discuss our results with assumption of an adversary, who has no knowledge about the architecture as well as weight parameters of the target model. 
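Before turning to the results, the two-stage procedure just described (the min-max filtering formalized in Algorithm 2, followed by Euclidean-distance matching of adversarial-image classifications and majority voting) can be sketched as follows. The data layouts of `timing_profiles`, `templates`, and `target_vectors` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from collections import Counter

def shortlist_by_timing(t_target, timing_profiles):
    """Algorithm 2 (sketch): keep architectures whose (MIN, MAX) inference-time
    range contains the target model's observed inference time."""
    return [arch for arch, (t_min, t_max) in timing_profiles.items()
            if t_min <= t_target <= t_max]

def predict_architecture(target_vectors, templates, candidates):
    """Match each adversarial image's top-5 vector against the per-architecture
    templates (mean over the k profiled models), then take a majority vote.

    target_vectors: dict {image_id: L-dim vector from the target model's output}
    templates:      dict {arch: {image_id: L-dim mean vector}}"""
    votes = []
    for img_id, v in target_vectors.items():
        dists = {arch: np.linalg.norm(v - templates[arch][img_id])
                 for arch in candidates}
        votes.append(min(dists, key=dists.get))
    return Counter(votes).most_common(1)[0][0]

# Illustrative end-to-end use:
# candidates = shortlist_by_timing(observed_time, timing_profiles)
# prediction = predict_architecture(target_vectors, templates, candidates)
```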
In this scenario, we would require multiple models to profile any particular architecture, and hence for each architecture we trained total of 15 models with varying weights for each architecture using the CIFAR-10 dataset. Among these 15 models we used 10 models for fingerprinting each architecture whereas we kept the 5 other models for testing purpose. It is to be noted that all the models have been fine-tuned on the pre-trained models, which means that the weights have been modified in all the layers of the model and not just the last layers as assumed in the prior work (Chen et al., 2022). The initial step involves profiling the inference time for each model across all architectures. We access all the model architectures remotely over an API call using the FLASK API setup. We discuss the results in detail in the next subsection. 5.1 Mapping Inference Time to Timing Profiles For every architecture, we record the inference times remotely through the API calls for all the trained models within that architecture and consolidate them. For our experiments, we have collected 10 timing traces for each model, thus accumulating a total of 100 timing values per architecture. For the target model, we collect 10 timing traces and then use Algorithm 2 to narrow down potential target models. To check the reliability of our approach we collected the timing traces for each of the 5 models of each architecture set apart earlier. In Figure 5b, we show the number of times the actual target architecture got shortlisted for all the test models. We observe that out of total 27 architectures, 25 architectures are shortlisted with 100% accuracy, whereas 2 architectures, VGG-16 and Shufflenet_V2 are correctly classified 4 out of 5 times. Overall 98.5% models are correctly shortlisted based on inference time. Now we move on to the next subsection, where we discern the target model from the shortlisted models using specifically chosen adversarial images. 5.2 Adversarial Image Set Selection We’ll now employ the approach defined in Section 4.1.1 to select the images which best help in distinguishing the group of shortlisted architectures for each of the 27 target architectures. We select these images utilizing adversarial image classifications from 10 models of varying weights from each architecture. Subsequently, we check the reliability of this approach using the test models set apart earlier for each architecture. We first select top 5 adversarial images suitable for distinguishing the shortlisted architectures using Algorithm 1. Then we apply the majority voting to get the final target model prediction. In Figure 6a, we show the majority voting scores for 5 test models of each architecture over different runs. Finally in Figure 6b, we show the number of correctly classified models out of the 5 test models for each target architecture after the majority voting. We observe that out of total 27 architectures 24 of them show 100% correct prediction for the 5 test models, whereas for Wide_resnet101_2 there are 4 correct predictions. Overall, we tested for $135 = 27 \times 5$ models of different architecture, and we get an average accuracy of 88.8% and maximum accuracy of 92.59% for correct predictions across different runs. 6 Conclusion In this study, we delved into the intriguing property of adversarial examples in machine learning models, with a focus on CNNs. 
We discovered that adversarial examples could influence the classification of various models, but don’t always trigger the same misclassification due to differing decision boundaries. Utilizing this, we developed a unique fingerprinting method for renowned pre-trained CNN and ViT architectures. Furthermore, we employed timing side-channels to minimize the number of adversarial image queries required for identifying the target model. This approach greatly reduced the queries needed, typically to fewer than 20. Moreover, we demonstrate that, despite fine-tuning all layers of the pre-trained model, we successfully beat the state-of-the-art work on model fingerprinting by correctly classifying 88.8% of models from varying architectures correctly. REFERENCES Lejla Batina, Shivam Bhasin, Dirmanto Jap, and Stjepan Picek. CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In *28th USENIX Security Symposium, USENIX Security 2019, Santa Clara, CA, USA, August 14-16, 2019*, pp. 515–532. USENIX Association, 2019. URL https://www.usenix.org/conference/usenixsecurity19/presentation/batina. Yufei Chen, Chao Shen, Cong Wang, and Yang Zhang. Teacher model fingerprinting attacks against transfer learning. In *31st USENIX Security Symposium, USENIX Security 2022, Boston, MA, USA, August 10-12, 2022*, pp. 3593–3610. USENIX Association, 2022. URL https://www.usenix.org/conference/usenixsecurity22/presentation/chen-yufei. Lukasz Chmielewski and Leo Weissbart. On reverse engineering neural network implementation on GPU. In *Applied Cryptography and Network Security Workshops - ACNS 2021 Satellite Workshops, AIBlock, AIHWS, AloTS, CIMSS, Cloud S&P, SCI, SecMT, and SiMLA, Kamakura, Japan, June 21-24, 2021, Proceedings*, volume 12809 of *Lecture Notes in Computer Science*, pp. 96–113. Springer, 2021. doi: 10.1007/978-3-030-81645-2_7. URL https://doi.org/10.1007/978-3-030-81645-2_7. Jacson Rodrigues Correia da Silva, Rodrigo Ferreira Berriel, Claudine Badue, Alberto Ferreira de Souza, and Thiago Oliveira-Santos. Copycat CNN: stealing knowledge by persuading confession with random non-labeled data. In *2018 International Joint Conference on Neural Networks, IJCNN 2018, Rio de Janeiro, Brazil, July 8-13, 2018*, pp. 1–8. IEEE, 2018. doi: 10.1109/IJCNN.2018.8489592. URL https://doi.org/10.1109/IJCNN.2018.8489592. David DeFazio and Arti Ramesh. Adversarial model extraction on graph neural networks. *CoRR*, abs/1912.07721, 2019. URL http://arxiv.org/abs/1912.07721. Vasisht Duddu, Debasis Samanta, D. Vijay Rao, and Valentina Emilia Balas. Stealing neural networks via timing side channels. *CoRR*, abs/1812.11720, 2018. URL http://arxiv.org/abs/1812.11720. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016*, pp. 770–778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90. Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. Stealing links from graph neural networks. In Michael Bailey and Rachel Greenstadt (eds.), *30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021*, pp. 2669–2686. USENIX Association, 2021. URL https://www.usenix.org/conference/usenixsecurity21/presentation/he-xinlei. Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, and Tudor Dumitras. 
Security analysis of deep neural networks operating in the presence of cache side-channel attacks. *CoRR*, abs/1810.03487, 2018. URL http://arxiv.org/abs/1810.03487. Xing Hu, Ling Liang, Shuangchen Li, Lei Deng, Pengfei Zuo, Yu Ji, Xinfeng Xie, Yufei Ding, Chang Liu, Timothy Sherwood, and Yuan Xie. Deepsniffer: A DNN model extraction framework based on learning architectural hints. In *ASPLOS ’20: Architectural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, March 16-20, 2020*, pp. 385–399. ACM, 2020. doi: 10.1145/3373376.3378460. URL https://doi.org/10.1145/3373376.3378460. Weizhe Hua, Zhiru Zhang, and G. Edward Suh. Reverse engineering convolutional neural networks through side-channel information leaks. In *Proceedings of the 55th Annual Design Automation Conference, DAC 2018, San Francisco, CA, USA, June 24-29, 2018*, pp. 4:1–4:6. ACM, 2018. doi: 10.1145/3195970.3196105. URL https://doi.org/10.1145/3195970.3196105.
QhYNXVcZYz
In Table 2 you measure the reconstruction quality of the model. This is a very unfair comparison with the rest of the methods: those methods do not have access to the original locations, so it seems rather intuitive that your method will perform really well.
SketchEdit: Editing Freehand Sketches at the Stroke-Level Anonymous authors Paper under double-blind review Abstract Freehand sketching is a representation of human cognition of the real world. Recent sketch synthesis methods have demonstrated the capability of generating lifelike outcomes. However, these methods directly encode the whole sketch instances and make it challenging to decouple the strokes from the sketches and have difficulty in controlling local sketch synthesis, e.g., stroke editing. Besides, the sketch editing task encounters the issue of accurately positioning the edited strokes, because users may not be able to draw on the exact position and the same stroke may appear on various locations in different sketches. We propose SketchEdit to realize flexible editing of sketches at the stroke-level for the first time. To tackle the challenge of decoupling strokes, our SketchEdit divides a drawing sequence of a sketch into a series of strokes based on the pen state, aligns the stroke segments to have the same starting position, and learns the embeddings of every stroke by a proposed stroke encoder. This design allows users to conveniently select the strokes for editing at any locations. Moreover, we overcome the problem of stroke placement via a diffusion process, which progressively generates the locations for the strokes to be synthesized, using the stroke features as the guiding condition. Both the stroke embeddings and the generated locations are fed into a sequence decoder to synthesize the manipulated sketch. The stroke encoder and the sequence decoder are jointly pre-trained under the autoencoder paradigm, with an extra image decoder to learn the local structure of sketches. Experiments demonstrate that the SketchEdit is effective for stroke-level sketch editing and outperforms state-of-the-art methods in the sketch reconstruction task. 1 Introduction People may draw sketches to express their abstract concepts for the real world, and humans possess an extraordinary ability to create imaginative sketches. The objective of sketch synthesis is to mimic the human drawing process through machines, and the task is challenging due to the sketch abstraction, sparsity, and lack of details. Recently, efforts have been made to learn efficient sketch representations and generate realistic sketches, such as Sketch-RNN (Ha & Eck, 2017), SketchLattice (Qi et al., 2021), SketchHealer (Si et al., 2020) and SP-gra2seq (Zang et al., 2023b). However, whilst existing methods (Zang et al., 2021; 2023a; Wang et al., 2022) exhibit effective control on generating sketches with certain global property, they are unable to perform finer control on strokes. For example, researchers have focused on synthesizing sketches of particular categories, such as generating a “cat”, but have difficulty in manipulating the shape of certain parts (e.g., the body) of the “cat”. Moreover, for users who lack expertise, completing sketches in a single attempt is challenging, and the selected strokes may require multiple modifications. This paper attempts to present a model to mimic human sketch editing at the stroke-level as in Fig. 1. To achieve the stroke-level editing, it is a key obstacle to pinpoint the strokes that require editing. 
For the conventional method (Ha & Eck, 2017) using a sequence of points to represent sketches, although the segments determined by the pen states can be directly used as strokes, the lengths of the obtained strokes are not the same, which is not convenient for editing the strokes and updating the sketch sequence. Rasterizing a sketch into an image is a common operation in sketch studies (Chen et al., 2017; Yu et al., 2015; 2016). However, these image-based methods lost details of the drawing order and the way sketch are drawn, making it more difficult to get the stroke information. Recently, the work [Ou et al., 2023] provided an effective way to break down the sketch sequence into strokes for downstream tasks, where the stroke segments are padded to be of the same length. Inspired by this idea, we develop a stroke encoder to encode each stroke separately, without exchanging information with other stroke. This approach provides the flexibility to select strokes and edit them in the latent space of the encoder while minimizing the impact on the content of the rest part of the sketch. Another challenge for stroke-level editing is how to appropriately place the strokes after the editing is done. As given in the second row of Fig. 1, if we replace the cat’s body with the sheep’s body, the cat’s head moves from the right to the left side of the image. If the cat’s head is still in its original position, the generated sketch will be unrealistic. Here, we develop a diffusion model [Ho et al., 2020] for accurate stroke placement. The diffusion model generates the stroke locations progressively through the denoising process, based on the features of all strokes to be synthesized. The diffusion model extends beyond the generation of single-category sketches, enabling the creation of more diverse results, e.g., a pig with wing-like ears. Furthermore, we fuse the stroke embeddings with the generated stroke locations, and devise a sequence decoder to synthesize the final manipulated sketch. The stroke encoder and the sequence decoder are jointly pre-trained under the autoencoder paradigm, with an extra image decoder to learn the local structure of sketches. In summary, we propose a novel sketch editing method called SketchEdit and our contributions are as follows: (i) We develop the traditional task of sketch synthesis into a more controllable sketch editing task at the stroke-level for the first time. The proposed SketchEdit achieves this purpose well and enables the generation of creative sketches. (ii) We present a fresh perspective on the placement of sketch strokes, where strokes are synthesized akin to assembling building blocks. Given a set of base strokes, we first generate meaningful placements for them, and then combine the strokes into a meaningful sketch. (iii) Experiments show that our method performs significantly better than the state-of-the-art sketch generation models for the task of sketch reconstruction. This guarantees that the edited sketch effectively retains the visual properties of the original sketch. 2 RELATED WORK Sketch generation. Sketching, as a practical communication tool and medium for emotional expression, is impressive and expressive. Its related generative tasks have attracted the interest of researchers [Zhou et al., 2018; Das et al., 2021; Ge et al., 2020]. An essential work to this is Sketch-RNN [Ha & Eck, 2017], which is facilitating research into deep learning for the imitation of human drawing. 
The Sketch-RNN is comprised of a bidirectional Long Short-Term Memory (LSTM) [Hochreiter & Schmidhuber, 1997] encoder and a unidirectional LSTM decoder. Although Sketch-RNN is capable of accurately capturing the connection between drawing points, it falls short in perceiving the local structural information of images. Therefore, the subsequent methods [Chen et al., 2017; Song et al., 2018] convert the sequence of sketches into rasterized images and introduce Convolutional Neural Networks (CNNs) [LeCun et al., 1998] as a replacement or supplement to the LSTM encoder. To improve the representational capabilities of the models, graph neural networks (GNNs) [Scarselli et al., 2008] are introduced on top of the image representation [Su et al., 2020; Qi et al., 2022; 2021; Zang et al., 2023b]. These methods construct graphs by temporal proximity, spatial proximity or synonymous proximity. Another method to improve performance is to use a Gaussian Mixture Model (GMM) to model the latent space and incorporate Rival Penalized Competitive Learning (RPCL) [Xu et al., 1993] to automatically select the number of Gaussians [Zang et al., 2021; 2023a]. However, as mentioned before, previous sketch generation models have struggled to decouple specific strokes, so the proposed SketchEdit takes strokes as input rather than images or drawing points. Figure 2: The overview of the proposed SketchEdit. (a) Denoising process conditional on strokes. Essentially, the goal is to reorganize strokes with confusing positions into meaningful sketches. (b) The pipeline for sketch editing by our method. The edited strokes are replaced at the input (or in the latent space) against the target strokes, and then the inverse denoising process is used to obtain meaningful stroke positions from the random noise. (c) Pre-training the stroke encoder and the sequence decoder which are used to generate stroke embeddings and synthesis target sketches for sketch editing task. Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015) have led to a boom in research, particularly in the field of image synthesis (Ho et al., 2020; Dhariwal & Nichol, 2021). Text-to-image (T2I) generation is a widely recognized application of diffusion models, which enables the rapid generation of artwork by providing prompts as a cue to large models, such as DALL-E2 (Ramesh et al., 2021), Imagen (Saharia et al., 2022), GLIDE (Nichol et al., 2021) and stable diffusion (Rombach et al., 2022). However, certain information remains difficult to convey solely through text, leading to the emergence of visual cues as conditions for diffusion models. Sketches are an effective tool for responding to structural information and are therefore regarded as control conditions by PITI (Voynov et al., 2023), ControlNet (Zhang & Agrawala, 2023), UniControl (Qin et al., 2023), T2I-Adapter (Mou et al., 2023), and other methods. Recently some diffusion models (Wang et al., 2022; Das et al., 2023) about sketches have been proposed, which focus on modeling the points of the sketch rather than the stroke locations. Different from the methods mentioned above, where the forward process and reverse denoising process are conducted on images, they consider the points in the sketch sequence as targets. The feasibility of this idea was verified experimentally. Inspired by their research, this paper investigates the potential use of a diffusion model to model stroke locations. 3 METHODOLOGY SketchEdit is constructed based on diffusion model to edit sketches at the stroke-level. 
The key step is to predict the locations of the strokes. This is achieved by the reverse denoising process of the diffusion model conditioned on stroke embeddings, as shown in Fig. 2(a). The SketchEdit decouples sketch into several strokes without position information, allowing the user to conveniently select strokes for editing. Strokes and generated locations are eventually fed into a sequence decoder to synthesis the edited sketch. The pipeline of editing sketches are illustrated in Fig. 2(b). 3.1 Sketch representation A sketch is represented by a sequence of $L_p$ points, i.e., $\tau = (p_1, p_2, ..., p_{L_p})$. Each point $p_i$ is a vector containing five elements. The first two are the coordinates of the absolute position, while the last three uses the one-hot vector format to represent the three pen states of lift, touch, and the end of sketch. To proceeds in the stroke-level, the sketch sequence is broken down into a series of strokes, i.e., $(s_1, s_2, ..., s_{L_s})$, where $L_s$ denotes the number of strokes. We use $(x, y) = [(x_1, y_1), (x_2, y_2), \ldots, (x_{L_s}, y_{L_s})]$ to record the locations of the strokes, which are the coordinates of the first point of the stroke. In this paper, we also define the normalized stroke sequence $\tilde{s}_i$ by subtracting the location $(x_i, y_i)$ of the stroke from the coordinates of all the points in the stroke. 3.2 Diffusion model for forecasting locations **Forward process.** Given a set of stroke locations $(x, y)_0 \sim q((x, y)_0)$, we apply the Markov diffusion process in DDPMs (Ho et al., 2020) here. The noise sampled from Gaussian distribution is gradually added to $x$ and $y$: $$q((x, y)_{1:T}|(x, y)_0) = q((x, y)_0) \prod_{t=1}^{T} q((x, y)_t|(x, y)_{t-1}),$$ $$q((x, y)_t|(x, y)_{t-1}) = N((x, y)_t; \sqrt{1 - \beta_t}(x, y)_{t-1}, \beta_t I),$$ where $\beta_t \in (0, 1)$ represents the noise schedule at time $t$. **Reverse process.** The reverse process aims to recreate the true locations from a Gaussian noise input $(x, y)_T$. Similar with the DDPMs (Ho et al., 2020), A U-Net (Ronneberger et al., 2015) like network is utilized to predict the noise $\epsilon_\theta((x, y)_t, t)$. However, stroke locations have no explicit semantic information, so it is necessary to introduce strokes as a condition. Thus, the network for predicting noise is modified to $\epsilon_\theta((x, y)_t, t, \tilde{s})$. To decrease computational complexity and leverage high-level semantic information, as illustrated in Fig. 2, we utilize the stroke embeddings $\tilde{z}$ as the condition rather than the strokes $\tilde{s}$. The reverse denoising process can be formalized as: $$p_\theta((x, y)_{t-1}|(x, y)_t, \tilde{z}) = N((x, y)_{t-1}; \mu_\theta((x, y)_t, t, \tilde{z}), \sigma^2_t I),$$ $$\mu_\theta((x, y)_t, t, \tilde{z}) = \frac{1}{\alpha_t} ((x, y)_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_\theta((x, y)_t, t, \tilde{z})), \quad (2)$$ where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$. In practice, we use the DDIM-based generation process for accelerated sampling. 3.3 Editing freehand sketches at the stroke-level In this subsection, we provide the process of editing sketch at the stroke-level. First, pick the to be edited stroke $\tilde{s}_i$ from the sketch $\tau$. The edited stroke $\tilde{s}_i$ can either be drawn by the user or selected from the stroke gallery to replace $\tilde{s}_i$. Taking the angle shown in Fig. 
2(b) as an example, we have obtained the strokes $\tilde{s}(\tilde{s}_1, \tilde{s}_2, \tilde{s}_3, \tilde{s}_4, \tilde{s}_5)$ after editing. Then, the stroke encoder calculates the stroke embeddings $\tilde{z}(\tilde{z}_1, \tilde{z}_2, \tilde{z}_3, \tilde{z}_4, \tilde{z}_5)$. As the encoding process does not involve the exchange of stroke information, stroke substitution in the latent space, such as replacing $\tilde{z}_4$ with the embedding of the edited stroke, is also possible. Next, we apply the reverse process of the diffusion model to denoise random noise $(\hat{x}, \hat{y})_T$ conditioned on $\tilde{z}$, resulting in the generated stroke locations $(\hat{x}, \hat{y})_0$. Finally, the stroke embeddings $\tilde{z}$ and the stroke locations $(\hat{x}, \hat{y})_0$ are fed into the token mixture block and the sequence decoder to synthesize the edited sketch $\tau$.

3.4 Construct the Stroke Encoder and the Sequence Decoder

After converting the sketch sequence to the normalized stroke representation, the resulting tensor \( \tilde{s} \in \mathbb{R}^{L_s \times L_n \times 5} \) is obtained, where \( L_n \) is the number of points in a stroke. A position-sensitive block must act as the backbone of the stroke encoder to extract features from \( \tilde{s} \), because significant changes in the shape of the stroke occur when any two points in the sequence are interchanged. Token-based MLPs (Tolstikhin et al., 2021) fulfill this requirement, and thus we consider gMLP (Liu et al., 2021) as the basic component. Since we do not wish for any exchange of information to occur between the strokes during the encoding stage, we can intuitively treat the first dimension of \( \tilde{s} \) as the batch size. Several layers are used to extract the stroke embeddings \( \tilde{z} \). Firstly, each point in a stroke is treated as a token, which then interacts through the network with other points. Next, these tokens are summed for aggregation to get \( \tilde{z}_{\text{enc}} \in \mathbb{R}^{L_s \times d_{\text{model1}}} \), where \( d_{\text{model1}} \) denotes the dimension of the tokens. The stroke embeddings \( \tilde{z} \in \mathbb{R}^{L_s \times d_{\text{model2}}} \) are calculated as follows:
\[
\begin{aligned}
\tilde{\mu}, \tilde{\sigma} &= f_{\text{linear}}(\tilde{z}_{\text{enc}}), \quad \tilde{\mu}, \tilde{\sigma} \in \mathbb{R}^{L_s \times d_{\text{model2}}}, \\
\tilde{z} &= \tilde{\mu} + \tilde{\sigma} \times \epsilon_{\text{enc}}, \quad \epsilon_{\text{enc}} \sim \mathcal{G}(0, I),
\end{aligned}
\]
where \( f_{\text{linear}}(\cdot) \) and \( d_{\text{model2}} \) represent a linear projection and the dimension of the stroke embeddings, respectively. The reparameterization trick (Kingma & Welling, 2013) employed in Equation 3 serves to effectively constrain the latent space, resulting in improved continuity. Then, we map the stroke locations \((x, y) \in \mathbb{R}^{L_s \times 2}\) to the location embeddings \( z_{\text{loc}} \in \mathbb{R}^{L_s \times d_{\text{model2}}} \). The summation of \( \tilde{z} \) and \( z_{\text{loc}} \) is fed into a token mixture block to mix the information of different strokes. The resulting \( z_{\text{mix}} \in \mathbb{R}^{L_s \times d_{\text{model2}}} \) is subsequently sent to both the sequence decoder and the image decoder. The decoders utilize spatial projection to increase the number of tokens before reconstructing either the sequence \( \tau(\tilde{p}_1, \tilde{p}_2, ..., \tilde{p}_n) \) or the image \( \tilde{I} \).
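The data flow of Sections 3.1 and 3.4 can be summarized in a short sketch: a point sequence is split into strokes at pen-lift states, each stroke is normalized to start at the origin, and the aggregated encoder output is turned into a stroke embedding via the reparameterization of Equation 3. The pen-state column index and the `f_linear` module are placeholder assumptions, not the paper's exact implementation.

```python
import torch

def split_and_normalize(seq):
    """Split a (Lp, 5) point sequence into strokes and normalize each one (Sec. 3.1).

    The first two columns are absolute coordinates; the pen-state one-hot is
    assumed to store the 'lift' flag in column 2 (order lift/touch/end)."""
    strokes, start = [], 0
    for t in range(seq.shape[0]):
        if seq[t, 2] == 1 or t == seq.shape[0] - 1:   # stroke ends when the pen lifts
            pts = seq[start:t + 1].clone()
            loc = pts[0, :2].clone()                  # stroke location (x_i, y_i)
            pts[:, :2] -= loc                         # normalized stroke \tilde{s}_i
            strokes.append((pts, loc))
            start = t + 1
    return strokes

def reparameterized_embedding(z_enc, f_linear):
    """Equation 3: z = mu + sigma * eps, with eps ~ N(0, I).

    z_enc: (Ls, d_model1) aggregated per-stroke encoder features;
    f_linear maps them to concatenated (mu, sigma) of width 2 * d_model2."""
    mu, sigma = f_linear(z_enc).chunk(2, dim=-1)
    return mu + sigma * torch.randn_like(mu)
```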
The backbone of the proposed token mixing block and sequence decoder is gMLP, while the image decoder is built based on CNNs. Thanks to the powerful global capture capability of gMLP, we can decode all sequence points simultaneously, rather than using the autoregressive approach (Ha & Eck, 2017; Chen et al., 2017; Su et al., 2020). This still result in good reconstruction outcomes. 3.5 Two-stage Training Pre-train the stroke encoder and the sequence decoder. After completing end-to-end training, the stroke encoder and the sequence decoder can effectively reconstruct sketches. There are three training objectives. The first is for the output of the sequence decoder, where our goal is to minimize the negative log-likelihood function of the generated probability distribution: \[ L_{\text{seq}} = -\mathbb{E}_{u_\phi(\tilde{s})} \log v_\xi(\tau|\tilde{z}, (x, y)). \] The training goal in Sketch-RNN (Ha & Eck, 2017) also pursues this aim, with the variance being the absolute or relative coordinates modeling. For calculating the image reconstruction loss \( L_{\text{img}} \), we utilize the traditional mean square error (MSE). To improve the representational power of the model (Zang et al., 2021, 2023a), GMM modeling is carried out in the encoder’s latent space. We initialize \( K \) Gaussian components and the appropriate number is determined automatically with the aid of RPCL (Xu et al., 1993). The corresponding loss function is formalized as follows: \[ L_{\text{GMM}} = \sum_{i=1}^{L_s} KL(u_\phi(\tilde{z}_i, k|\tilde{s}_i)||o_\psi(\tilde{z}_i, k)), \] where \( \tilde{z}_i \) is the stroke embedding correspond to the stroke \( s_i \), and the KL term is calculated as in (Jiang et al., 2016). The parameters of the GMM are learned by an EM-like algorithm, details of which can be found in (Zang et al., 2021). In summary, the overall objective is: \[ L_{\text{AE}} = L_{\text{seq}} + L_{\text{img}} + \lambda L_{\text{GMM}}, \] where \( \lambda \) is a hyperparameter and we set it to 0.0001 in practice. Train the diffusion model. In this stage, the previously trained parameters of the stroke encoder and the sequence decoder are fixed, and the following are the training objectives of the diffusion model: \[ \min_\theta \mathbb{E}[\epsilon - \epsilon_\theta((x, y)_t, t, \tilde{z})]_2^2. \] 4 EXPERIMENT 4.1 PREPARATION Dataset. Two datasets are selected from the largest sketch dataset QuickDraw (Ha & Eck [2017]) for experiments. DS1 is a 17-category dataset (Su et al., 2020; Qi et al., 2022). The specific categories are: airplane, angel, alarm clock, apple, butterfly, belt, bus, cake, cat, clock, eye, fish, pig, sheep, spider, umbrella, the Great Wall of China. These categories are common in life and the instances in the categories are globally similar in appearance. DS2 (Zang et al., 2021) is a widely used, comparatively small dataset for synthesized sketches, comprising five categories: bee, bus, flower, giraffe, and pig. Each category contains 70000 sketches for training and 2500 sketches for testing. Implement Details. The AdamW optimizer (Loshchilov & Hutter [2017]) is applied to train the proposed model with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$ and weight decay $= 0.01$. We use the CosineAnnealingLR scheduler (Smith & Topin [2019]) with the peak learning rates are 0.002 and 0.0005 for the pre-trained model and the diffusion model, respectively. All the sketch is padded to the same length, i.e. $L_p = 180$. 
4 EXPERIMENT

4.1 PREPARATION

Dataset. Two datasets are selected from the largest sketch dataset, QuickDraw (Ha & Eck, 2017), for experiments. DS1 is a 17-category dataset (Su et al., 2020; Qi et al., 2022). The specific categories are: airplane, angel, alarm clock, apple, butterfly, belt, bus, cake, cat, clock, eye, fish, pig, sheep, spider, umbrella, and the Great Wall of China. These categories are common in daily life, and the instances within each category are globally similar in appearance. DS2 (Zang et al., 2021) is a widely used, comparatively small dataset for sketch synthesis, comprising five categories: bee, bus, flower, giraffe, and pig. Each category contains 70000 sketches for training and 2500 sketches for testing.

Implementation Details. The AdamW optimizer (Loshchilov & Hutter, 2017) is applied to train the proposed model with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$ and weight decay $= 0.01$. We use the CosineAnnealingLR scheduler (Smith & Topin, 2019) with peak learning rates of 0.002 and 0.0005 for the pre-trained model and the diffusion model, respectively. All sketches are padded to the same length, i.e., $L_p = 180$. Each sketch is broken down into $L_s = 25$ strokes and each stroke contains 96 points. The method is implemented in PyTorch and trained on 5 RTX 2080Ti GPUs. For the pre-trained network, we train for 15 epochs with a batch size of 200. There are 8 gMLP blocks in the stroke encoder with $d_{model1} = 96$ and $d_{ffn1} = 384$. The token mixture block and the sequence decoder include 2 and 12 gMLP blocks, respectively. We set $d_{model2} = 128$ and $d_{ffn2} = 512$ for these blocks. The drop path rate is set to 0.1. We train the U-Net of the diffusion model for 40 epochs with a batch size of 768. Its encoder and decoder both consist of 12 gMLP blocks with a drop path rate of 0.1. The $d_{model}$ and $d_{ffn}$ in these blocks are 96 and 384, respectively. For the forward process and the reverse denoising process, we set the number of time steps to $T = 1000$. We use a linear noise schedule with $\beta_1 = 0.0001$ and $\beta_T = 0.02$, and take 60 steps for DDIM sampling by default.

Competitors. We consider three types of models as competitors for sketch reconstruction. Sketch-RNN (Ha & Eck, 2017) employs a VAE (Kingma & Welling, 2013) framework to learn sketch representations from sequences. Sketch-pix2seq (Chen et al., 2017) takes sketch images as input to learn local structural information from sketches. RPCL-pix2seq (Zang et al., 2021) develops the decoder of Sketch-pix2seq into a dual-branch architecture and constrains the latent code with a GMM. Based on the rasterized sketch images, SketchHealer (Su et al., 2020), SketchLattice (Qi et al., 2021), and SP-gra2seq (Zang et al., 2023b) introduce GNNs for better representations; the graphs are constructed based on time, position, and synonymous proximity, respectively.

Metrics. To evaluate the performance of SketchEdit, we select Rec (Zang et al., 2021), FID (Heusel et al., 2017), LPIPS (Zhang et al., 2018), and CLIP Score (Radford et al., 2021; Hessel et al., 2021) as the metrics. To classify whether the recreated sketches belong to the original category, two Sketch-a-Nets (Yu et al., 2015) are trained on DS1 and DS2, respectively. Rec is the success rate of recognition.

4.2 EDITING SKETCHES AT THE STROKE-LEVEL

Stroke-level sketch editing involves modifying distinct strokes while minimizing the impact on the overall structure. Sketches typically consist of various basic shapes, and strokes from other sketches can be conveniently reused to edit the intended sketch, as illustrated in Fig. 3. The reused shapes may be components from the same class with clearly defined meanings, for example, an airplane fuselage, an umbrella handle, and so on. Using interpolation techniques to generate additional components with uniform semantics can efficiently produce a substantial number of novel sketches. Apart from that, creative editing enables a sensible synthesis of strokes from different categories of sketches. Some examples are provided in Fig. 3; for instance, the alarm clock's bells have been replaced by apple stems, and SketchEdit has found a "logical" place for the apple stem. Although our method is flexible when it comes to editing strokes, identifying appropriate metrics for evaluation remains challenging. Therefore, we employ an intermediate task for evaluation. First, we use the diffusion model to predict stroke locations from the normalized strokes. Then sketches are synthesized with the generated locations. Subsequently, Table 1 reports the experimental results for this intermediate task.
Table 1: The performance for recreating sketches from normalized strokes with unknown locations. Diffusion models are involved in the prediction of locations. SketchEdit(o.l) denotes recreating sketches with the original locations rather than the generated ones.

| Model | DS1 Rec(↑) | FID(↓) | LPIPS(↓) | CLIP-S(↑) | DS2 Rec(↑) | FID(↓) | LPIPS(↓) | CLIP-S(↑) |
|----------------|------------|--------|----------|-----------|------------|--------|----------|-----------|
| SketchEdit | 78.79% | 3.79 | 0.29 | 94.59 | 85.89% | 7.77 | 0.37 | 91.96 |
| SketchEdit(o.l)| 84.32% | 3.12 | 0.11 | 96.73 | 93.42% | 5.88 | 0.19 | 94.25 |

Figure 3: Exemplary sketch editing results. Boxes of the same color in each row denote the respective modified strokes. Creative sketches can be generated through interpolation between strokes in the latent space, and the locations of strokes are produced by our diffusion model.

Figure 4: Examples of issues caused by the utilization of generated locations in sketch reconstruction. (a) Some of the components have moved. (b) The sketch category changes. (c) The synthesized results are meaningless.

Compared to the performance of recreating sketches with the original locations, the results with the generated ones experience a significant decrease, especially in the LPIPS metric. There are three primary factors contributing to the decline in semantic similarity between the recreated sketches and their corresponding sketches, as shown in Fig. 4. Reasonable movement of components and the occasional generation of a different sketch class are tolerated, because our diffusion model is given no guidance beyond what is needed to obtain appropriate stroke positions. An important reason for the generation of meaningless sketches is that the target sketches are rarer patterns, and our diffusion model struggles to accurately predict stroke locations for them.

4.3 Comparison With State-of-the-Art Methods For Sketch Reconstruction

Sketch reconstruction requires the model to recreate the sketch $\tilde{\tau}$ from the input $\tau$. High-quality sketch reconstruction is essential to maintaining a consistent visual appearance between the edited sketch and the original sketch. In this subsection, we compare SketchEdit with other sketch synthesis methods. For a fair comparison, our model uses the original stroke locations instead of the generated ones.

**Quantitative analysis.** Table 2 reports the sketch reconstruction performance of the proposed method and its competitors. Our model significantly outperforms the other methods across all metrics. The SketchEdit model captures global dependencies in sketch sequences more efficiently, while the proposed sequence decoder avoids the difficulty of stacking LSTM layers, and the deeper network improves reconstruction results. However, due to the data-driven nature of the gMLP block, it lacks adequate inductive bias, resulting in a less prominent advantage of SketchEdit on the smaller DS2 compared to DS1. For Sketch-RNN (Ha & Eck, 2017), the FID metric behaves differently from the other metrics. The inputs for Sketch-RNN and SketchEdit consist of sketch sequences or strokes, without requiring the sketches to be rasterized into images. There exists a considerable domain gap between the sequences and the images, resulting in a disparity between the distributions learned by the image-based approaches and those of the sequence-based sketches.

Figure 5: Exemplary results of sketches reconstructed by the proposed SketchEdit and other models.
The categories from left to right are alarm clock, butterfly, belt, cake, cat, sheep, spider, and the Great Wall of China.

Table 2: The sketch reconstruction performance of the proposed method and its competitors on DS1 and DS2.

| Model | DS1 Rec($\uparrow$) | FID($\downarrow$) | LPIPS($\downarrow$) | CLIP-S($\uparrow$) | DS2 Rec($\uparrow$) | FID($\downarrow$) | LPIPS($\downarrow$) | CLIP-S($\uparrow$) |
|----------------|-----------------|-------------------|---------------------|--------------------|-----------------|-------------------|---------------------|--------------------|
| Sketch-RNN | 64.51% | 6.87 | 0.33 | 91.82 | 77.74% | 10.45 | 0.40 | 90.29 |
| Sketch-pix2seq | 66.99% | 42.03 | 0.34 | 90.04 | 88.36% | 42.78 | 0.37 | 90.22 |
| RPCL-pix2seq | 69.86% | 44.09 | 0.32 | 90.37 | 90.66% | 27.32 | 0.35 | 90.80 |
| SketchLattice | 48.88% | 48.70 | 0.44 | 87.06 | 77.54% | 50.92 | 0.45 | 87.80 |
| SketchHealer | 76.76% | 21.62 | 0.32 | 92.15 | 90.93% | 24.43 | 0.36 | 91.28 |
| SP-gra2seq | 76.60% | 21.92 | 0.33 | 92.01 | 91.12% | 21.69 | 0.37 | 91.15 |
| SketchEdit | **84.32%** | **3.12** | **0.11** | **96.73** | **93.42%** | **5.88** | **0.19** | **94.25** |

**Qualitative analysis.** Fig. 5 presents the qualitative comparisons. Compared to other approaches, SketchEdit is capable of reconstructing sketches with high quality, without introducing additional noisy strokes, while preserving the structural patterns of the sketches. To prevent generated sketches from changing category, the model must first learn an accurate category-level representation. A failure case is that Sketch-pix2seq reconstructs the Great Wall in the last column into a belt. Capturing structural information at the instance level is a challenging undertaking: while nearly all the competitors reproduced "cakes" as "cakes", the generated results displayed significant structural changes. Furthermore, the existence of multiple styles within the same sketch category poses a challenge to sketch reconstruction. The proposed SketchEdit shows significant preservation of detail about sketch instances, which is the basis for our sketch editing task.

4.4 Ablation Study

In this subsection, we discuss the effectiveness of the image decoder and the token mixture block. We conduct the ablation study on DS1. SketchEdit(wo_i) and SketchEdit(wo_s) denote that no image decoder is included and that no token mixture block is shared between the two decoders, respectively.

Table 3: The performance for sketch reconstruction with the original locations and the generated locations.

| Model | Original Locations | | | | Generated Locations | | | |
|---------------------|--------|--------|--------|--------|--------|--------|--------|--------|
| | Rec(↑) | FID(↓) | LPIPS(↓) | CLIP-S(↑) | Rec(↑) | FID(↓) | LPIPS(↓) | CLIP-S(↑) |
| SketchEdit(wo_i) | 83.56% | 3.80 | 0.15 | 96.34 | 78.19% | 4.45 | 0.30 | 94.36 |
| SketchEdit(wo_s) | 84.20% | 3.21 | 0.12 | 96.45 | 78.41% | 3.93 | 0.29 | 94.42 |
| SketchEdit(full) | **84.32%** | **3.12** | **0.11** | **96.73** | **78.79%** | **3.79** | **0.29** | **94.59** |

Table 3 reports the results of the ablation experiments. SketchEdit(full) and SketchEdit(wo_s), which include the image decoder, have performance advantages over SketchEdit(wo_i). This is because the use of image reconstruction allows the network to learn shape information and spatial relationships. Similarly, SketchEdit(wo_s) makes it difficult for the token mixture block at the sequence decoder to learn image-related information. As shown in Fig. 6, some strokes overlap in the results produced by SketchEdit(wo_s) and SketchEdit(wo_i), which reduces the quality of the recreated sketches.
In addition, SketchEdit(full) has marginally fewer parameters compared to SketchEdit(wo_s) as it only employs a single token mixture block. Figure 6: Comparison of recreated sketches across various models in ablation studies. 5 Conclusion And Future Work In this paper, we develop the traditional sketch synthesis task to the more controllable sketch editing task at the stroke-level and propose the SketchEdit to realize it. We have focused on decoupling independent strokes from sketches to enable editing operations at the stroke-level. The core of our methodology is to employ the diffusion model to acquire reasonable positions and recreate meaningful sketches based on the strokes. Experimental results demonstrate that SketchEdit can edit sketches without altering categories and facilitate the production of innovative sketches across various categories. Meanwhile, SketchEdit which efficiently preserves the spatial structure of sketches and supports the parallel reconstruction of sketch sequences, surpasses the state-of-the-art methods significantly in the sketch reconstruction task. While our work contributes to the research on the controllability of sketch generation, there remain a number of issues that require further improvement in the future. (i) One aspect is more flexible control, e.g., given no reference strokes, the model is able to automatically obtain a large number of reasonable strokes to replace the ones to be edited. (ii) Although our technique is capable of producing high-quality outcomes, certain strokes are subject to over-smoothing, resulting in dissimilarities from human drawing styles. Therefore, it is worthwhile further exploring the design of models that align with human drawing styles and efficiently generate sequences. (iii) The design of metrics also a tricky issue. Most of the existing metrics for measuring the results of image generation are based on natural images rather than abstract sketches. Thus, the development of novel sketch evaluation metrics for recreated or edited sketches also constitute a key aspect of forthcoming research. 6 REPRODUCIBILITY STATEMENT To ensure the reproducibility of the proposed methodology, the details of each module are provided in the Appendix. The full project code and the detailed usage are provided in the supplementary material. REFERENCES Yajing Chen, Shikui Tu, Yuqi Yi, and Lei Xu. Sketch-pix2seq: a model to generate sketches of multiple categories. *arXiv preprint arXiv:1709.04121*, 2017. Ayan Das, Yongxin Yang, Timothy M Hospedales, Tao Xiang, and Yi-Zhe Song. Cloud2curve: Generation and vectorization of parametric sketches. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7088–7097, 2021. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Chirodiff: Modelling chirographic data with diffusion models. *arXiv preprint arXiv:2304.03785*, 2023. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in neural information processing systems*, 34:8780–8794, 2021. Songwei Ge, Vedanuj Goswami, Larry Zitnick, and Devi Parikh. Creative sketch generation. In *International Conference on Learning Representations*, 2020. David Ha and Douglas Eck. A neural representation of sketch drawings. *arXiv preprint arXiv:1704.03477*, 2017. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. *arXiv preprint arXiv:2104.08718*, 2021. 
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30, 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in neural information processing systems*, 33:6840–6851, 2020. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997. Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. *arXiv preprint arXiv:1611.05148*, 2016. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Hanxiao Liu, Zihang Dai, David So, and Quoc V Le. Pay attention to mlps. *Advances in Neural Information Processing Systems*, 34:9204–9215, 2021. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. *arXiv preprint arXiv:2302.08453*, 2023. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. *arXiv preprint arXiv:2112.10741*, 2021.
o5Bqa4o5Mi
Why don't you use ranking metrics (Mean Average Precision, DCG, etc.)? These seem quite relevant for evaluating methods for offline policy selection. As you mentioned, absolute error in terms of value prediction may not always be relevant.
π2vec: Policy Representation with Successor Features Gianluca Scarpellini* † Istituto Italiano di Tecnologia Ksenia Konyushkova Google DeepMind Claudio Fantacci Google DeepMind Tom Le Paine Google DeepMind Yutian Chen Google DeepMind Misha Denil Google DeepMind †: Work done during an internship at Google DeepMind *: Corresponding author gianluca.scarpellini@iit.it Abstract This paper introduces π2vec, a method for representing black box policies as comparable feature vectors. Our method combines the strengths of foundation models that serve as generic and powerful state representations and successor features that can model the future occurrence of the states for a policy. π2vec represents the behaviors of policies by capturing statistics of how the behavior evolves the features from a pretrained model, using a successor feature framework. We focus on the offline setting where both policies and their representations are trained on a fixed dataset of trajectories. Finally, we employ linear regression on π2vec vector representations to predict the performance of held out policies. The synergy of these techniques results in a method for efficient policy evaluation in resource constrained environments. 1 Introduction Robot time is an important bottleneck in applying reinforcement learning in real life robotics applications. Constraints on robot time have driven progress in sim2real, offline reinforcement learning (offline RL), and data efficient learning. However, these approaches do not address the problem of policy evaluation which is often time intensive as well. Various proxy metrics were introduced to eliminate the need for real robots in the evaluation. For example, in sim2real we measure the performance in simulation [Lee et al., 2021]. In offline RL we rely on Off-policy Evaluation (OPE) methods [Gulcehre et al., 2020; Fu et al., 2021]. For the purpose of deploying a policy in the real world, recent works focused on Offline Policy Selection (OPS), where the goal is to select the best performing policy relying only on offline data. While these methods are useful for determining coarse relative performance of policies, one still needs time on real robot for more reliable estimates [Levine et al., 2020]. Our proposed π2vec aims at making efficient use of the evaluation time. Efficient offline policy evaluation and selection is relevant in reinforcement learning projects, where researchers often face the challenge of validating improvements. π2vec enables researchers to make more informed decisions regarding which new policy iterations to prioritize for real-world testing or to identify and discard less promising options early in the development process. In particular, we predict the values of unknown policies from a set of policies with known values in an offline setting, where a large dataset of historical trajectories from other policies and human demonstrations is provided. The last step requires policies to be represented as vectors which are comparable and thus can serve as an input to the objective function. Prior work from [Konyushkova et al., 2021] represents policies by the actions that they take on a set of canonical states, under the assumption that similar actions in similar states imply similar behaviour. However, this assumption is sometimes violated in practice. This work aims at finding more suitable representation by characterizing the policies based on how they change the environment. 
Figure 1: The $\pi$2vec method relies on the successor feature framework, which we adopt in combination with a dataset of offline demonstrations and a visual foundation model $\phi$. $\pi$2vec represents each policy $\pi_i$ as a feature vector $\Psi_{\pi_i}^\phi \in \mathbb{R}^n$. $\Psi_{\pi_i}^\phi$ encodes the expected behavior of a policy when deployed on an agent.

To represent policies, our method π2vec combines two components: successor features and foundation models. We adapt the framework of Q-learning of successor features (Barreto et al., 2017) to the offline setting by applying the Fitted Q evaluation (FQE) algorithm (Le et al., 2019), which is typically used for off-policy evaluation (OPE). In this work the features for individual states are provided by a general-purpose pretrained visual foundation model (Bommasani et al., 2021). The resulting representations can be used as a drop-in replacement for the action-based representation used by Konyushova et al. (2021). Our experiments show that $\pi$2vec achieves solid results in different tasks and across different settings. To summarize, our main contributions are the following:

- We propose $\pi$2vec, a novel policy representation of how the policies change the environment, which combines successor features, foundation models, and offline data;
- We evaluate our proposal through extensive experiments predicting return values of held-out policies in 3 simulated and 2 real environments. Our approach outperforms the baseline and achieves solid results even in challenging real robotic settings and out-of-distribution scenarios;
- We investigate various feature encoders, ranging from semantic to geometrical visual foundation models, to show strengths and weaknesses of various representations for the task at hand.

2 RELATED WORK

Representation of black-box policies. In this paper, our objective is to create vector representations for policies to predict their performance. We treat policies as black boxes (i.e., no access to internal state, parameters, or architectures) that yield actions for a given observation. It is important to emphasize that our objective differs from representation learning for RL (Schwarzer et al., 2020; Jaderberg et al., 2016; Laskin et al., 2020), as we focus on representing policies rather than training feature encoders for downstream tasks. Konyushova et al. (2021) studied a setting where the goal is to identify the best policy from a set of policies with a dataset of offline experience and limited access to the environment. Each policy is represented by a vector of actions at a fixed set of states. While this representation performs well in certain applications, it may not be the most effective for predicting policy performance. For instance, consider two policies that generate random actions at each state. These policies do not exhibit meaningfully different behaviour, so for policy evaluation purposes, we expect them to be similar. However, the action-based policy representation categorizes these policies as different. This paper proposes a method to address this limitation by measuring trajectory-level changes in the environment. In BCRL (Chang et al., 2022), a state-action feature representation is proposed for estimating policy performance. However, the representation of each policy is independent of other policies and thus cannot be employed to regress the performance of new policies given a set of evaluated policies.

Offline Policy Evaluation.
Off-policy Evaluation (OPE) aims to evaluate a policy given access to trajectories generated by another policy. It has been extensively studied across many domains (Li et al., 2010; Theocharous et al., 2015; Kalashnikov et al., 2018; Nie et al., 2019). Broad categories of OPE methods include methods that use importance sampling (Precup, 2000), binary classification (Irpan et al., 2019), stationary state distribution (Liu et al., 2018), value functions (Sutton et al., 2016). and learned transition models (Zhang et al., 2021), as well as methods that combine two or more approaches (Farajtabar et al., 2018). The main focus of the OPEs approaches is on approximating the return values function for a trained policy, while π2vec goes beyond classical OPE and focuses on encoding the behavior of the policy as vectors, in such a way that those vectors are comparable, to fit a performance predictor. **Foundation Models for Robotics.** Foundation models are large, self-supervised models (Bommasani et al., 2021) known for their adaptability in various tasks (Sharma et al., 2023). We compare three representative foundation models (Radford et al., 2021; Dosovitskiy et al., 2021; Doersch et al., 2022). Our proposal, π2vec, is independent of the feature encoder of choice. Better or domain-specific foundation models may improve results but are not the focus of this study. ### 3 METHODOLOGY #### 3.1 OVERVIEW Our setting is the following. We start with a large dataset of historical trajectories \( \mathbb{D} \), and a policy-agnostic state-feature encoder \( \phi : S \rightarrow \mathbb{R}^N \). Given a policy \( \pi \), our objective is to use these ingredients to create a policy embedding \( \Psi_{\phi}^{\pi} \in \mathbb{R}^N \) that represents the behavior of \( \pi \) (and can be used to predict its performance). We aim to create this embedding offline, without running the policy \( \pi \) in the environment. Although we can evaluate \( \pi \) for any state in our historical dataset \( \mathbb{D} \), we emphasize that we do not have access to any on policy trajectories from \( \pi \), which significantly complicates the process of creating an embedding that captures the behavior of \( \pi \). Our method π2vec has three steps: 1. Choose a policy-agnostic state-feature encoder \( \phi \). We discuss several options for \( \phi \) below and in the experiments; however, π2vec treats the policy-agnostic state-feature encoder as a black box, allowing us to leverage generic state-feature representations in our work. 2. Train a policy-specific state-feature encoder \( \psi_{\phi}^{\pi} : (S, A) \rightarrow \mathbb{R}^N \). In this step we combine the policy-agnostic state-feature encoder \( \phi \), and the policy \( \pi \), to create policy-specific state-feature encoder by training on the historical dataset \( \mathbb{D} \). The policy-specific state features \( \psi_{\phi}^{\pi}(s) \) capture statistics of how \( \pi \) would change the environment were it to be run starting from the state \( s \). 3. Aggregate the policy-specific state-features to create state-agnostic policy features \( \Psi_{\phi}^{\pi} \) that represent the behavior of \( \pi \) in a state-independent way. Using the steps outlined above we can collect a dataset of policy-specific state-independent features paired with measured policy performance. This dataset can be used to train a model that predicts the performance of a policy from its features using supervised learning. 
Because we compute features for a policy using only offline data, when we receive a new policy we can compute its policy-specific state-independent features and apply the performance model to predict its performance before running it in the environment. In the following sections we expand on each step. #### 3.2 POLICY-AGNOSTIC STATE FEATURES The role of the state-feature encoder \( \phi \) is to produce an embedding that represents an individual state of the environment. In this paper we focus on state encoders \( \phi : I \rightarrow \mathbb{R}^N \) that consume single images \( I \). Generically our method is agnostic to the input space of the state-feature encoder, but practically speaking it is convenient to work with image encoders because that gives us access to a wide range of pretrained generic image encoders that are available in the literature. We also consider a few simple ways to construct more complex features from single image features. When each state provides multiple images we embed each image separately and sum the result to create a state embedding. We also consider creating embeddings for transitions \( (s, s') \) by computing \( \Delta \phi(s, s') = \phi(s') - \phi(s) \). Both cases allow us to leverage features from pretrained models. Figure 2: Given a trajectory from the dataset of offline demonstrations, we train successor feature $\psi^\phi_\pi(s_t)$ to predict the discounted sum of features $\sum_i \gamma^i \phi(s_{t+i})$, where $\phi$ is a visual feature extractor and $\pi$ is a policy. Intuitively, $\phi(s_t)$ represents semantic changes in the current state of the environment $s_t$, while successor feature $\psi^\phi_\pi(s_t)$ summarizes all future features encoded by $\phi$ if actions came from policy $\pi$. 3.3 Policy-specific State Features The next step is to use the policy-agnostic state-feature encoder $\phi$ that provides a generic representation for individual states to train a policy-specific state-feature encoder $\psi^\phi_\pi : (S, A) \rightarrow \mathbb{R}^N$ that represents the effect that $\pi$ would have on the environment if it were run starting from the given state. The work of Dayan (1993); Barreto et al. (2017) on successor features provides a basis for our approach to policy representation. We briefly review successor features here, and comment below on how we make use of them. We refer the reader to recent literature covering successor features Lehnert & Littman (2020); Brantley et al. (2021); Reinke & Alameda-Pineda (2021). Suppose that the reward function for a task can be written as a linear function $$r(s, a, s') = \langle \phi(s, a, s'), w_{\text{task}} \rangle,$$ where $\phi(s, a, s') \in \mathbb{R}^N$ encodes the state-transition as a feature vector and $w_{\text{task}} \in \mathbb{R}^N$ are weights. Barreto et al. (2017) observe that if the reward can be factored as above, then the state-action-value function for a policy $\pi$ can be written as $$Q^\pi(s, a) = \mathbb{E}_{(s'|s) \sim D, a \sim \pi(s)} \left[ \sum_{i=t}^{\infty} \gamma^{i-t} r(s_i, a_i, s_{i+1}) \right] = \langle \psi^\phi_\pi(s, a), w_{\text{task}} \rangle,$$ where $$\psi^\phi_\pi(s, a) = \mathbb{E}_{(s'|s) \sim D, a \sim \pi(s)} \left[ \sum_{i=t}^{\infty} \gamma^{i-t} \phi(s_i, a_i, s_{i+1}) \right],$$ $(s|s') \sim D$ is a transition from the environment, and $\gamma$ is the discount factor. 
The corresponding state-value function is $V^\pi(s) \triangleq Q^\pi(s, \pi(s)) = \langle \psi^\phi_\pi(s, \pi(s)), w_{\text{task}} \rangle \triangleq \langle \psi^\phi_\pi(s), w_{\text{task}} \rangle$. We will use the notation $\psi^\phi_\pi(s) \triangleq \psi^\phi_\pi(s, \pi(s))$ frequently throughout the remainder of the paper. The value of $\psi^\phi_\pi(s)$ is known as the successor features of the state $s$ under the policy $\pi$. Successor features were originally motivated through the above derivation as a way of factoring the value function of a policy into a task-independent behavior component (the successor features) that is independent of the task, and a task-dependent reward component that is independent of behavior. For our purposes we will mostly ignore the reward component (although we return to it in one of the experiments) and focus on the behavior term shown in Equation 3. This term is interesting to us for two reasons. First, we can see by inspection of the RHS that the value of $\psi^\phi_\pi(s) = \psi^\phi_\pi(s, \pi(s))$ represents the behavior of $\pi$ as a future discounted sum of state features along a trajectory obtained by running $\pi$ beginning from the state $s$. In other words, $\psi^\phi_\pi$ represents the behavior of $\pi$ in terms of the features of the states that \( \pi \) will encounter, where the state features are themselves given by the policy-agnostic state-feature encoder from the previous section. Figure 2 summarizes the relationship between successor features \( \psi \) and state encoders \( \phi \). Second, Equation 3 satisfies the Bellman equation meaning that the function \( \psi_\phi^\pi(s, a) \) can be estimated from off-policy data in a task-agnostic way using a modified version of Q-learning, where the scalar value reward in ordinary Q-learning is replaced with the vector valued transition features \( \phi(s, a, s') \). We rely on Fitted Q Evaluation (FQE, Le et al. (2019)), an offline Q-learning based algorithm, and thus, we obtain a representation of policy behavior purely from data without executing the policy in the environment. Given a dataset \( D \) and a policy \( \pi \), FQE estimates its state-action-value function \( Q^\pi(s, a) \) according to the following bootstrap loss: \[ L(\theta) = \mathbb{E}_{(s, a, r, s') \sim D, a' \sim \pi(s')} \left[ \| \psi_\phi^\pi(s, a) - (\phi(s, a, s') + \psi_\phi^\pi(s', a')) \|_2^2 \right]. \] (4) FQE is simple to implement and it performs competitively with other OPE algorithms in a variety of settings (Fu et al., 2021) including simulated and real robotics domains (Paine et al., 2020; Konyushova et al., 2021). We use FQE with our historical dataset \( D \) to train a policy-specific state-action-feature network \( \psi_\phi^\pi(s, a) \), which we then use as the policy-specific state-feature encoder \( \psi_\phi^\pi(s) \triangleq \psi_\phi^\pi(s, \pi(s)) \) by plugging in the policy action. ### 3.4 State-Agnostic Policy Features We obtain a single representation \( \Psi_\phi^\pi \) of a policy \( \pi \) from the state-dependent successor features \( \psi_\phi^\pi(s) \) for that policy by averaging the successor features over a set of canonical states: \[ \Psi_\phi^\pi = \mathbb{E}_{s \sim D_{can}} [\psi_\phi^\pi(s)], \] (5) where \( D_{can} \) is a set of states sampled from historical trajectories. We sample the canonical states set \( D_{can} \subset D \) uniformly from our historical dataset, as in Konyushova et al. 
(2021), ensuring that each canonical state comes from a different trajectory for better coverage. We average successor features over the same set \( D_{can} \) for every policy. The intuition behind this representation is that \( \psi_\phi^\pi(s) \) represents the expected change that \( \pi \) induces in the environment by starting in the state \( s \); by averaging over \( D_{can} \), \( \Psi_\phi^\pi \) represents an aggregated average effect of the behavior of \( \pi \). ### 3.5 Performance Prediction We aim at predicting the performance of novel, unseen policies. We begin with a dataset of historical policies for which we have measured performance \( \Pi = \{\ldots, (\pi_i, R_i), \ldots\} \). For each policy in this dataset we create an embedding using the above procedure to obtain a new dataset \( \{\ldots, (\Psi_\phi^\pi_i, R_i), \ldots\} \) and then train a performance model \( R_i = f(\Psi_\phi^\pi_i) \) using supervised learning. Given a new policy \( \pi_* \) we can then predict its performance before running it in the environment by computing the \( \pi \)-2vec features for the new policy using the above procedure and applying the performance model to obtain \( \hat{R}_* = f(\Psi_\phi^{\pi_*}) \). ### 4 Experimental Setup In this section we describe the feature encoders, domains, and evaluation procedures, followed by details about our baselines. More details about our architecture, domains, and training procedure can be found in the Appendix. #### Feature encoder Firstly, the Random feature encoder employs a randomly-initialized ResNet-50 (He et al., 2016). Random features are trivial to implement, and achieve surprisingly strong performance in many settings (Rahimi & Recht, 2007). Here they serve as a simple baseline. Next, we explore with CLIP (Radford et al., 2021). CLIP-network is trained to match image and text embeddings on a large-scale dataset of image caption pairs. Intuitively, by aligning image and text features, CLIP network is trained to encode high-level semantic information. Visual Transformers (VIT) (Dosovitskiy et al., 2021) treat images as a 1D sequence of patches and learn visual features via an attention mechanism. In our experiments the visual transformer is pre-trained on imagenet classification. Figure 3: We adopt 5 environments. (i) Kitchen: 5 tasks (Knob-on, Left door open, light on, microwave open, and right door open) and 3 points of views. (ii) Metaworld: 4 tasks (assembly, button press, bin picking, and drawer open) and 3 points of views. (iii) Insert gear in simulation (iii) and (iv) on a real robot. (v) RGB stacking on a real robot. Lastly, we explore Track-any-point (TAP) (Doersch et al., 2022), a general-purpose network for point tracking in videos. The network is pre-trained to track arbitrary points over video sequences and as a result it learns to understand the low-level geometric features in a scene. We use an attention layer trained to select task-relevant features from the TAP model to reduce dimensionality. This set of feature encoders spans a spectrum of properties as they are created by optimising different objectives. At one extreme CLIP features are trained to align image features with a text description, and encode the semantics of the image. At the other extreme TAP features are trained to track points in videos, and capture low level geometric and texture information. ViT features are in the middle, they need to encode both semantics and local texture to accomplish classification tasks. 
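Whichever encoder $\phi$ is chosen, the surrounding pipeline of Sections 3.3–3.5 stays the same. The following is a minimal PyTorch sketch with our own names: psi_net, phi, and policy are assumed callables, the successor-feature target uses the usual discount on the bootstrap term, and the performance model is an ordinary least-squares fit standing in for the regressor of Section 3.5.

```python
import torch

def successor_feature_loss(psi_net, phi, policy, batch, gamma=0.99):
    """One TD-style update for the policy-specific features (cf. Eq. 4)."""
    s, a, s_next = batch                                   # transitions from the offline dataset D
    a_next = policy(s_next)                                # action the evaluated policy would take
    # phi(s, s_next): e.g. foundation-model features of s', or delta features phi(s') - phi(s)
    target = phi(s, s_next) + gamma * psi_net(s_next, a_next).detach()
    return ((psi_net(s, a) - target) ** 2).sum(-1).mean()

def pi2vec_embedding(psi_net, policy, canonical_states):
    """State-agnostic policy features: average successor features over canonical states (Eq. 5)."""
    with torch.no_grad():
        return torch.stack([psi_net(s, policy(s)) for s in canonical_states]).mean(dim=0)

def fit_performance_model(Psi, returns):
    """Fit R ~ f(Psi) on already-evaluated policies; here f is linear with a bias term."""
    X = torch.cat([Psi, torch.ones(len(Psi), 1)], dim=1)
    w = torch.linalg.lstsq(X, returns.unsqueeze(-1)).solution
    return lambda psi: (torch.cat([psi, torch.ones(1)]) @ w).item()
```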
Depending on the environment and task at hand, a better state representation is likely to result in better prediction properties of π2vec. We leave the question of finding the best representation as future work.

Domains. We present extensive experiments to support π2vec's capabilities across three simulated domains—Insert Gear (Sim), Metaworld, and Franka-Kitchen—and two real domains—Insert Gear (Real) and RGB Stacking (Figure 3). In each domain we use a dataset of offline human demonstrations (Metaworld and Kitchen) and held-out policies' trajectories (RGB Stacking and Insert Gear) for training policy representations. Each policy is treated as a black box, where we do not have any prior knowledge about the architecture or training parameters. We provide further details in the Supplementary.

Evaluation. We assess the quality of the policy representations by measuring the ability of the model \( f \) to predict the performance of held-out policies (see Section 3.5). We adopt k-fold cross validation over the set \( \Pi \) and report results averaged over cross-validation folds. Following previous works on offline policy evaluation (Paine et al., 2020; Fu et al., 2021), we adopt the following three complementary metrics. We report further details in the Supplementary.

- **Normalized Mean Absolute Error (NMAE)** measures the accuracy of the prediction w.r.t. the ground truth. We adopt MAE instead of MSE to be robust to outliers, and we normalize the error by the range of return values for each environment. Lower is better.
- **Rank Correlation** measures how well the estimated values correlate with the ground truth. Correlation reflects how many evaluations on the robot are required to find the best policy. Higher is better.
- **Regret@1** measures the performance difference between the best policy and the predicted best policy, normalized w.r.t. the range of return values for each environment. Lower is better.

Correlation and Regret@1 are the most relevant metrics for evaluating π2vec on OPS. On the other hand, NMAE refers to the accuracy of the estimated return value and is suited for OPE.

Baselines. The problem in this paper is to represent policies in such a way that the representations can be used to predict the performance of other policies given the performance of a subset of policies. Importantly, to address this problem the representation should 1) encode the behavior of the policy, 2) in a way that is comparable with the representations of other policies, and 3) not require online data.

Table 1: We compare $\pi$2vec and Actions representations for the Insert-gear (real) and Insert-gear (sim) tasks, as well as for the RGB stacking environment. The table shows the performance and confidence intervals for different feature representations and encoders.
| Representation | NMAE ↓ | Correlation ↑ | Regret@1 ↓ |
|---------------|--------|--------------|------------|
| **RGB Stacking** | | | |
| Actions | 0.261 ±0.045 | **0.785** ±0.177 | 0.074 ±0.083 |
| VIT | **0.224** ±0.063 | 0.775 ±0.146 | **0.036** ±0.116 |
| ΔVIT | 0.344 ±0.050 | 0.030 ±0.332 | 0.375 ±0.206 |
| CLIP | 0.330 ±0.042 | 0.342 ±0.293 | 0.325 ±0.180 |
| ΔCLIP | 0.287 ±0.048 | 0.583 ±0.126 | 0.079 ±0.126 |
| Random | 0.304 ±0.066 | 0.330 ±0.334 | 0.226 ±0.177 |
| ΔRandom | 0.325 ±0.109 | 0.352 ±0.348 | 0.190 ±0.180 |
| **Insert gear (real)** | | | |
| Actions | 0.252 ±0.028 | -0.545 ±0.185 | 0.578 ±0.148 |
| Random | 0.275 ±0.027 | -0.207 ±0.267 | 0.360 ±0.162 |
| CLIP | **0.198** ±0.030 | **0.618** ±0.136 | **0.267** ±0.131 |
| ΔCLIP | 0.253 ±0.228 | -0.109 ±0.100 | 0.429 ±0.100 |
| **Insert gear (sim)** | | | |
| Actions | 0.174 ±0.015 | 0.650 ±0.056 | 0.427 ±0.172 |
| Random | 0.215 ±0.026 | 0.555 ±0.104 | 0.422 ±0.143 |
| TAP | **0.164** ±0.022 | **0.680** ±0.095 | 0.359 ±0.184 |
| VIT | 0.224 ±0.025 | 0.402 ±0.129 | 0.448 ±0.195 |
| ΔVIT | 0.255 ±0.024 | 0.218 ±0.139 | 0.457 ±0.153 |
| CLIP | 0.180 ±0.031 | 0.502 ±0.068 | **0.298** ±0.126 |
| ΔCLIP | 0.189 ±0.020 | 0.586 ±0.077 | 0.314 ±0.147 |

Active Offline Policy Selection (AOPS) (Konyushova et al., 2021) stands alone as a notable work that delves into policy representation from offline data, with the task of deciding which policies should be evaluated first to gain the most information about the system. AOPS showed that representing policies according to its algorithm leads to faster identification of the best policy. In AOPS's representation, which we call "Actions", policies are represented through the actions that the policies take on a fixed set of canonical states. We build the Actions representation as follows. We run each policy $\pi$ on the set of states $D_{can}$ sampled from historical trajectories. Next, we concatenate the resulting set of actions $\{\pi(s)\}_{s \in D_{can}}$ into a vector. To the best of our knowledge, the Actions representation is the only applicable baseline in the setting that we adopt in this paper. Nevertheless, OPE methods that estimate policy performance from a fixed offline dataset are standard methodology in the offline RL literature. Although these methods do not take full advantage of the problem setting in this paper (the performance of some of the policies is known), they can still serve for comparison. In this paper, we compared against FQE, which is a recommended OPE method that strikes a good balance between performance (it is among the top methods) and complexity (it does not require a world model) (Fu et al., 2021).

## 5 RESULTS

We report results for various feature encoders for Insert gear (sim and real) and RGB Stacking. Similarly, we report results averaged over 4 tasks and 3 points of view for Metaworld and over 5 tasks and 3 points of view for Kitchen. Along with results for each feature encoder, we report the average results of picking the best feature encoder for each task (BEST-$\phi$). Similarly, we report as BEST-CLIP and BEST-VIT the average results when adopting the best feature encoder between CLIP/VIT and ΔCLIP/ΔVIT. We identify the best feature encoder for a task by conducting cross-validation on previously evaluated policies and picking the best encoder in terms of regret@1.
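For reference, the three metrics used to report these results can be computed roughly as follows. This is our own minimal sketch written from the textual definitions above; the authors' exact normalization and tie handling may differ.

```python
import torch

def nmae(true_returns, pred_returns):
    """Normalized mean absolute error, normalized by the range of ground-truth returns."""
    return (true_returns - pred_returns).abs().mean() / (true_returns.max() - true_returns.min())

def rank_correlation(true_returns, pred_returns):
    """Spearman-style rank correlation: Pearson correlation of the ranks (no tie correction)."""
    rt = true_returns.argsort().argsort().float()
    rp = pred_returns.argsort().argsort().float()
    return torch.corrcoef(torch.stack([rt, rp]))[0, 1]

def regret_at_1(true_returns, pred_returns):
    """Gap between the best policy and the policy predicted to be best, normalized by the range."""
    best_predicted = true_returns[pred_returns.argmax()]
    return (true_returns.max() - best_predicted) / (true_returns.max() - true_returns.min())
```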
Our results demonstrate that (i) $\pi$2vec outperforms the Actions baseline consistently across real and simulated robotics environments and multiple tasks, showcasing the framework's effectiveness in representing policies. Furthermore, we demonstrate the applicability to real-world robotic settings, specifically in the challenging Insert Gear (Real) environment, where even underperforming policies contribute to improved policy evaluation. We show that choosing the best model as a feature extractor greatly improves results (ii). Finally, we adopt π2vec to solve Equation 2 and estimate policies' return values in Metaworld's assembly environment, without relying on any ground-truth data (iii). Although the successor feature assumption of linearity of rewards is violated, π2vec still ranks policies competitively in the offline setting when compared to FQE. In the Appendix, we provide an intuition for choosing the best ϕ based on the correlation between task difficulty (iv), and we study the effect of different dataset types, such as demonstrations and trajectories from held out policies (v). We investigate π2vec's generalization capabilities (vi), including out-of-distribution scenarios (vii). We also demonstrate that π2vec represents random policies close in the feature space (viii), and that π2vec is robust to canonical state coverage (ix) and effective with online data (x).

Table 2: We evaluate π2vec on Metaworld and Kitchen. The results are averaged over all settings, and confidence intervals are reported. BEST-ϕ is the average π2vec performance assuming that we adopt the best ϕ in terms of regret@1 for each task-POV setting. Similarly, BEST-CLIP and BEST-VIT use the best feature encoder between CLIP/VIT and ΔCLIP/ΔVIT, respectively.

| Representation | NMAE ↓ | Correlation ↑ | Regret@1 ↓ |
|---------------|--------|--------------|------------|
| **Metaworld** | | | |
| Actions | 0.424 ±0.058 | 0.347 ±0.152 | 0.232 ±0.078 |
| CLIP | 0.340 ±0.035 | 0.254 ±0.143 | 0.250 ±0.076 |
| ΔCLIP | 0.325 ±0.092 | 0.286 ±0.154 | 0.232 ±0.086 |
| BEST-CLIP | 0.309 ±0.027 | 0.351 ±0.130 | 0.194 ±0.076 |
| VIT | 0.303 ±0.030 | 0.280 ±0.146 | 0.263 ±0.091 |
| ΔVIT | 0.315 ±0.026 | 0.162 ±0.169 | 0.325 ±0.084 |
| BEST-VIT | 0.298 ±0.029 | 0.300 ±0.147 | 0.244 ±0.092 |
| Random | 0.366 ±0.086 | 0.043 ±0.150 | 0.375 ±0.108 |
| BEST-ϕ | **0.289 ±0.018** | **0.460 ±0.099** | **0.153 ±0.060** |
| **Kitchen** | | | |
| Actions | 0.857 ±0.128 | 0.326 ±0.128 | 0.221 ±0.089 |
| CLIP | 0.417 ±0.032 | 0.021 ±0.219 | 0.317 ±0.081 |
| ΔCLIP | 0.352 ±0.026 | 0.260 ±0.216 | 0.244 ±0.081 |
| BEST-CLIP | 0.333 ±0.025 | 0.346 ±0.200 | 0.197 ±0.076 |
| VIT | 0.385 ±0.030 | 0.030 ±0.244 | 0.322 ±0.095 |
| ΔVIT | 0.344 ±0.025 | 0.155 ±0.234 | 0.251 ±0.082 |
| BEST-VIT | **0.321 ±0.024** | **0.412 ±0.228** | **0.151 ±0.068** |
| Random | 0.382 ±0.033 | -0.017 ±0.225 | 0.334 ±0.080 |
| BEST-ϕ | 0.392 ±0.053 | **0.591 ±0.203** | **0.070 ±0.045** |

(i) π2vec consistently outperforms Actions. We compare π2vec and Actions across all scenarios. Our method outperforms the Actions representation when predicting values of unseen policies in both real robotics scenarios—RGB stacking and insert-gear (real)—as shown in Table 1. In the former, ΨVIT achieves regret@1 of 0.036 compared to Actions' 0.074, a relative improvement of 51%. In the latter, ΨCLIP improves over Actions by achieving regret@1 of 0.267 compared to Actions' 0.578, and drastically outperforms Actions in terms of correlation, achieving +0.618 compared to Actions' −0.545.
π2vec performs robustly on insert gear (real) even though policies' performances for this task vary greatly (see the supplementary material for per-task policy performances). We also evaluate our approach in the simulated counterpart, Insert Gear (Sim). In this environment, ΨCLIP and ΨTAP achieve regret@1 of 0.314 and 0.359 respectively, compared to Actions' 0.427. We underline the dichotomy between geometrical and semantic features: ΨTAP performs best in terms of NMAE and Correlation, while ΨCLIP outperforms in Regret@1. These results highlight how various ϕ compare depending on the setting, the type of task, and policy performance.

(ii) When evaluating across multiple settings, selecting ϕ leads to better results. We compare π2vec with different foundation models across 12 Metaworld settings and 15 Kitchen settings. Table 2 reports the average results across all settings for Metaworld and Kitchen. In Metaworld, we notice that Actions performs on par with ΨCLIP, ΨVIT, and their respective variations ΔCLIP and ΔVIT, in terms of correlation and regret@1, while our approach consistently outperforms Actions in terms of NMAE. As these domains have less state variability, Actions represents policies robustly. We test CLIP/ΔCLIP and VIT/ΔVIT on previously evaluated policies for each task through cross-validation to identify the best feature encoder for the task in terms of regret@1. We report Ψ^BEST-CLIP and Ψ^BEST-VIT as the average results over the best among CLIP/VIT and ΔCLIP/ΔVIT. Ψ^BEST-CLIP achieves regret@1 of 0.194 and correlation of 0.351, outperforming the Actions representation. We highlight that the choice of ϕ is critical, since Ψ^random—using a randomly-initialized ResNet50 as feature extractor—underperforms. Moreover, π2vec with the best ϕ drastically improves, achieving regret@1 of 0.153 compared to Actions' 0.232. We notice similar improvements when evaluating on Kitchen's 15 settings. Table 2 compares choosing the BEST ϕ w.r.t. VIT and CLIP, and against Actions. In Kitchen, Ψ^VIT outperforms Ψ^CLIP and Actions, while Ψ^BEST−ϕ achieves the overall best results.

Table 3: We extend π2vec to the fully-offline setting and test it on the Metaworld assembly task (left, right, and top). We report results and confidence intervals. In this setting, performances of all policies are unknown.

| Representation | NMAE ↓ | Correlation ↑ | Regret@1 ↓ |
|---------------|--------|--------------|------------|
| **Assembly (left)** | | | |
| FQE | **0.338 ±0.062** | 0.125 ±0.218 | 0.424 ±0.260 |
| π2vec | 8.306 ±0.155 | **0.360 ±0.097** | **0.215 ±0.079** |
| **Assembly (right)** | | | |
| FQE | **0.270 ±0.093** | -0.029 ±0.351 | 0.504 ±0.071 |
| π2vec | 2.116 ±0.056 | **0.154 ±0.115** | **0.319 ±0.080** |
| **Assembly (top)** | | | |
| FQE | **0.322 ±0.012** | -0.251 ±0.516 | 0.609 ±0.228 |
| π2vec | 0.492 ±0.006 | **0.555 ±0.106** | **0.149 ±0.071** |

(iii) π2vec enables fully-offline policy selection. By directly modelling the relationship between successor features and returns, we avoid the linear reward assumption of the original successor features work. This is preferable since rewards are generally not linearly related to state features. However, this restricts our method to settings where some policies' performance is known. To evaluate performance in a fully-offline setting, we fit a linear model of the task reward \( \hat{r} = \langle \phi(s), w_{\text{task}} \rangle \) given the state's feature representation \( \phi(s) \), as in Equation 2 from the original successor features work.
Next we predict policies returns as \( \hat{R}_i = \langle \Psi^\phi_{\pi_i}, w_{\text{task}} \rangle \). We compare our approach to FQE in Table 3 and find that while our method’s return predictions are inaccurate (as evidenced by the high NMAE), it still performs well in ranking policies (higher Correlation and lower Regret@1). 6 CONCLUSION We presented π2vec, a framework for offline policy representation via successor features. Our method treats the policy as a black box, and creates a representation that captures statistics of how the policy changes the environment rather than its idiosyncrasies. The representations can be trained from offline data, and leverage the pretrained features of visual foundation models to represent individual states of the environment. In our experiments, we represented policies by relying on visual features from semantic (CLIP), geometric (TAP), and visual (VIT) foundation models. We showed that π2vec outperforms previously used Actions based representations and generalizes to fully-offline settings. Overall, our experiments showcase the effectiveness and versatility of π2vec in representing policies and its potential for various applications in reinforcement learning. Moving forward, we acknowledge that finding the optimal combination of these elements remains an ongoing challenge. Future work should explore diverse foundation models, offline learning algorithms for successor feature training, and dataset choices. Fine-tuning the feature encoder \( \phi \) along with \( \psi^\phi_\theta \) is interesting but pose challenges, as each feature encoder would specialize to predict features for a specific policy, resulting in policy representations that are independent and not comparable. We leave end-to-end fine-tuning as future work. Integrating π2vec into AOPS framework (Konyushova et al., 2021) for enhanced offline policy selection is another intriguing avenue. Additionally, extending π2vec to augment the Generalized Policy Improvement (Barreto et al., 2017) in offline settings presents exciting research opportunities. REFERENCES André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. *Advances in neural information processing systems*, 30, 2017. Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In *International Conference on Machine Learning*, pp. 449–458. PMLR, 2017. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021. Kianté Brantley, Soroush Mehri, and Geoff J Gordon. Successor feature sets: Generalizing successor representations across policies. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11774–11781, 2021. Jonathan Chang, Kaiwen Wang, Nathan Kallus, and Wen Sun. Learning bellman complete representations for offline policy evaluation. In *International Conference on Machine Learning*, pp. 2938–2971. PMLR, 2022. Peter Dayan. Improving generalization for temporal difference learning: The successor representation. *Neural computation*, 5(4):613–624, 1993. Carl Doersch, Ankush Gupta, Larisa Markeeva, Adrià Recasens, Lucas Smaira, Yusuf Aytar, João Carreira, Andrew Zisserman, and Yi Yang. Tap-vid: A benchmark for tracking any point in a video. 
*arXiv preprint arXiv:2211.03726*, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando de Freitas, and Serkan Cabi. Vision-language models as success detectors. *arXiv preprint arXiv:2303.07280*, 2023. Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. More robust doubly robust off-policy evaluation. pp. 1447–1456, 2018. Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning, 2020. Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, et al. Benchmarks for deep off-policy evaluation. *arXiv preprint arXiv:2103.16596*, 2021. Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Thomas Paine, Sergio Gómez, Konrad Zolna, Rishabh Agarwal, Josh S Merel, Daniel J Mankowitz, Cosmin Paduraru, et al. Rl unplugged: A suite of benchmarks for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:7248–7259, 2020. Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. *arXiv preprint arXiv:1910.11956*, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Alexander Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, and Sergey Levine. Off-policy evaluation via off-policy classification. *Advances in Neural Information Processing Systems*, 32, 2019.
jMJ9IRWmH9
Regarding the privacy-preserving backpropagation. Although $\partial l/\partial h$ is protected, the server still has clean $\partial h/\partial \theta$. This still leaks information about the label. The 2-dimensional TSNE of $\partial h/\partial \theta$ should be similar to $\partial l/\partial \theta$, because multiplying $\partial l/\partial h$ is only a linear operator.
PRIVACY-PRESERVING LLM FINE-TUNING OVER API Anonymous authors Paper under double-blind review ABSTRACT As deep learning models become larger and more expensive, many practitioners turn to fine-tuning APIs. These web services allow fine-tuning a model between two parties: the client that provides the data, and the server that hosts the model. While convenient, the fine-tuning APIs raise a new concern: the data of the client is at risk of privacy breach during the training procedure. This challenge presents an important practical case of vertical federated learning, where the two parties perform parameter-efficient fine-tuning (PEFT) of a large pre-trained model. In this study, we systematically search for a way to fine-tune models over an API while keeping the labels private. We analyze the privacy of popular algorithms for parameter-efficient fine-tuning when training over an API. Using this analysis, we propose P³EFT, a multi-party split learning algorithm that takes advantage of existing PEFT properties to maintain privacy at a lower performance overhead. To validate our algorithm, we fine-tune DeBERTa-v2-XXLarge and Flan-T5 using LoRA adapters on a range of common NLP tasks. We find that P³EFT is competitive with existing privacy-preserving methods in multi-party and two-party setups while having higher accuracy. 1 INTRODUCTION One of the main reasons behind deep learning success is its ability to transfer knowledge between tasks (Tan et al., 2018). When training a model for any particular problem, it is common to reuse previously trained models from other, related problems. In the past, this was typically done by downloading pre-trained model weights from public hubs, then fine-tuning the said models on the downstream task. However, as models grow larger and more compute-intensive, fine-tuning them locally becomes an increasingly difficult task. Furthermore, many recent models are not released, but instead made available as proprietary services. When a model cannot be fine-tuned locally, many practitioners opt instead for the so-called fine-tuning APIs. These APIs are web services backed by remote servers that host one or several pre-trained models and allow clients to perform limited fine-tuning. More specifically, APIs usually allow their clients to run parameter-efficient fine-tuning (PEFT), such as LoRA (Hu et al., 2022) or Prefix-tuning (Li & Liang, 2021). This is particularly necessary for large language models and image generative models, both of which are notoriously expensive to train. Most fine-tuning APIs have a single endpoint backed by a pool of servers of a particular organization, such as OpenAI API (OpenAI, 2023) or Hugging Face AutoTrain (Hugging Face, 2023) for fine-tuning language models and Dreambooth API (2023) or OctoAI API (OctoAI, 2023) for fine-tuning diffusion models. Recently, there have also appeared several decentralized fine-tuning systems, such as Petals (Borzunov et al., 2022). Although the fine-tuning APIs can be convenient, they also introduce new challenges and risks that were absent in local fine-tuning. If a client uses such API to fine-tune the model on sensitive data, they need to ensure that their data will stay private. This is particularly important when dealing with patient’s medical records, personal user data or trade secrets. The two main threats to data privacy are that the API provider obtains the private data and that a third party intercepts data in transit. Therefore, data privacy is not guaranteed even if the API provider is trusted. 
This forces many privacy-sensitive parties to avoid fine-tuning APIs and train their models locally, which is often less efficient and prevents them from using the state-of-the-art models. In this work, we seek to alleviate this problem by designing a two-party fine-tuning protocol that performs standard parameter-efficient fine-tuning with privacy guarantees. We formulate our protocol as a special case of split learning (or vertical federated learning), where one side (server) holds the pre-trained model and the other (client) has private training data. More specifically, we focus on the privacy of client’s training labels. While input privacy is also important, we found that inputs can often be anonymized or obfuscated by other means (see Section 2.1). Instead of developing a specific privacy-preserving architecture or training objective, we seek algorithms that can work with popular existing models and PEFT algorithms. Furthermore, our approach relies on some of the properties of parameter-efficient fine-tuning. Notably, since the adapters are compact, both parties can maintain multiple sets of adapters and swap between them with relative ease. This allows us to design a PEFT-specific algorithm that can solve its task more effectively than general split learning strategies. We summarize the main contributions of our work as follows: - We analyze common parameter-efficient fine-tuning algorithms from the perspective of label privacy. We observe that, despite fine-tuning less than 0.1% of model parameters, modern PEFT algorithms leak client’s training labels against simple attacks that work for modern pretrained transformers. - Based on our analysis, we formulate a framework for privacy-preserving parameter-efficient fine-tuning (P³EFT). This framework leverages the properties of PEFT to provably obfuscate the gradients communicated during fine-tuning with no impact on the fine-tuned model quality. - To verify the practical viability of P³EFT, we conduct experiments on popular real-world PEFT workloads. Notably, we fine-tune DeBERTa-v2-XXL (He et al., 2021) and Flan-T5 (Chung et al., 2022) on a set of standard language understanding problems. We find that, compared to prior split learning algorithms, P³EFT can maintain label privacy throughout training with significantly smaller accuracy drop. 2 BACKGROUND 2.1 FEDERATED LEARNING AND SPLIT LEARNING Privacy preservation in machine learning has been a subject of active study within several frameworks. An important branch of privacy-preserving learning methods is federated learning, or FL (McMahan et al., 2017), which can be broadly described as an approach allowing several parties to train a model jointly without sharing their private data. In particular, vertical federated learning (Hardy et al., 2017; Yang et al., 2019) targets the scenario where different features (including the label) of each training instance are kept by different parties. One of the most popular approaches to vertical FL for neural networks is split learning (Gupta & Raskar, 2018; Vepakomma et al., 2018), where each party stores its part of the overall model. To train the model in such an approach, it is only necessary to transfer the intermediate activations and the gradients between layers, while the data itself is stored at the premises of the participant hosting each layer. In this work, we focus on the two-party formulation of split learning, where one side stores the features for each example and another one stores the labels. 
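The activation/gradient exchange described above can be illustrated with a minimal two-party sketch. This is an illustrative toy example in PyTorch, not the protocol proposed later in the paper; the layer sizes, optimizer, and data are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy two-party split: the non-label party holds the features and the bottom model,
# the label party holds the labels and a small top model ("head").
bottom = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # non-label party
head = nn.Linear(32, 2)                                 # label party
opt_bottom = torch.optim.SGD(bottom.parameters(), lr=1e-2)
opt_head = torch.optim.SGD(head.parameters(), lr=1e-2)

x = torch.randn(8, 16)          # private features (non-label party)
y = torch.randint(0, 2, (8,))   # private labels (label party)

# Non-label party: forward pass, send activations h to the label party.
h = bottom(x)

# Label party: compute the loss locally and send the gradient w.r.t. h back.
h_received = h.detach().requires_grad_(True)
loss = nn.functional.cross_entropy(head(h_received), y)
loss.backward()                  # fills h_received.grad and the head's gradients

# Non-label party: finish backprop through the bottom model with the received gradient.
h.backward(h_received.grad)
opt_bottom.step(); opt_head.step()
opt_bottom.zero_grad(); opt_head.zero_grad()
```

The gradient `h_received.grad` sent back by the label party is exactly the quantity that the label-leakage attacks discussed next can exploit.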
Recent works have investigated the setting of two-party split learning from the label leakage perspective (Vepakomma et al., 2019; Pasquini et al., 2021): because the label party needs to pass the gradients of the loss function to the non-label party, it is possible for the latter party to deduce the labels by inspecting the gradients or activations or by hijacking the training procedure. Li et al. (2022) provide a set of attack methods that allow recovering private labels and propose a defense mechanism that injects noise into the gradients; however, they test the approach on pretraining smaller models, and we study finetuning large models on private downstream data. --- 1The code is available at github.com/iclr2023-anonymous/P3EFT 2.2 Parameter-Efficient Finetuning The majority of large neural networks today are not trained with a specific task in mind; instead, they are pretrained on a general objective and then adapted for the downstream problem. Importantly, the growth in the size of foundation models has led to the increased popularity of parameter-efficient finetuning (PEFT) methods that adapt the model to a given task by training a small number of task-specific parameters. There are several prominent approaches to parameter-efficient finetuning, ranging from trainable prompts (Li & Liang, 2021; Hambardzumyan et al., 2021), to residual adapters (Houlsby et al., 2019; Pfeiffer et al., 2021). We focus on Low-Rank Adaptation (or LoRA, Hu et al., 2022), one of the most popular PEFT methods that adds extra parameters to each weight matrix in the form of a low-rank factorization (see Appendix B for a more detailed description). Such formulation allows LoRA adapters to be merged into the original weights after finetuning; this ability, combined with the simplicity of the method, has made LoRA a broadly popular approach in multiple domains. Still, the approach we propose can be applied to any PEFT method. Importantly, the connections between data-private learning and parameter-efficient finetuning have been explored in several past works. One of the earlier works at the intersection of these areas is Yu et al. (2022); however, its primary focus is differential privacy, i.e., hiding the identity of each training example rather than hiding the training task itself. As also argued by Li et al. (2022), in the setting of split learning, the non-label party knows the participation of each example in the training procedure; therefore, differential privacy is not applicable in the conditions we study. Zhao et al. (2023) explore the viability of prompt tuning for federated learning and Zhang et al. (2023) study four PEFT algorithms in the setting of horizontal federated learning, comparing their task performance, communication costs, and privacy preservation capabilities. The primary distinction between our work and these studies is that we investigate parameter-efficient adaptation in the setting of split learning: instead of training over data split across workers, we aim to finetune a model without disclosing the labels of examples to the model provider. 3 Privacy-Preserving Parameter-Efficient Fine-Tuning In this section, we analyze the privacy of parameter-efficient fine-tuning and propose a protocol for two-party parameter-efficient fine-tuning with the desired privacy guarantees. We begin by analyzing the privacy of API fine-tuning with popular PEFT algorithms in Section 3.1. Then, in Section 3.2, we formulate a protocol for privately computing gradients over fine-tuning APIs. 
Finally, we formulate the full P3EFT protocol in Section 3.3. 3.1 Two-Party Split Fine-Tuning To analyze the privacy of API fine-tuning, we first need to formulate a common framework for this type of APIs and develop private learning protocols. This step is important, because existing fine-tuning APIs greatly vary in what they offer to the client. Notably, as of writing of this paper, most API providers ask users to submit their training data, perform fine-tuning with some undisclosed parameters, and returns a handle that can later be used to query the model. This approach offers no avenue for ensuring that client’s data is private from the provider. Furthermore, this type of API offers clients no flexibility in how they want to perform their fine-tuning. Another, more flexible type of fine-tuning API allows clients to run individual forward and backward passes over a remote model (Borzunov et al., 2022; Rao et al., 2021; Li et al., 2023). A client can use these APIs to obtain the training gradients for their PEFT adapters, then update adapters with any optimization method. In our work, we adopt this archetype of fine-tuning API as it offers sufficient flexibility to develop privacy-preserving algorithms. We formulate fine-tuning over an API for two or more parties: a client, and one or several servers. The client owns a training dataset with inputs $X$ and labels $Y$. In turn, each server has the same pre-trained model $h(x_i, \theta) \in \mathbb{R}^d$. Note that the parameters $\theta$ denote not the pre-trained model weights, but the trainable adapter weights for a certain PEFT algorithm. A model can encode an input $x_i \in X$ and produce a $d$-dimensional vector of hidden activations (learned input representations) that depend on the learned adapter weights $\theta$. To allow fine-tuning, each server offers two API methods: forward($x, \theta$) that returns $h(x, \theta)$, and backprop($x, \theta, g_h$) = $g_\theta$ that receives gradients $g_h = \frac{\partial L(h(x, \theta))}{\partial h(x, \theta)}$ of an arbitrary loss function w.r.t. model activations and returns the gradients of the same loss function with respect to the specified PEFT parameters, $g_\theta = \frac{\partial L(h(x, \theta))}{\partial \theta}$. We further assume that both forward(·) and backward(·) APIs are stateless and deterministic, i.e. calling the same API method multiple times (or on multiple servers) with the same inputs produces identical results. Thus, if the model uses dropout or any other form of non-determinism, we assume that clients provide the random seed as a part of $x$. Real-world fine-tuning APIs are not exactly nondeterministic due to hardware and software limitations. In principle, they can be made exactly deterministic at the cost of slower computation. However, this is not necessary, as our work does not rely on strict determinism up to numeric precision. Finally, fine-tuning APIs can provide several models and offer more than one PEFT algorithm, which we leave out of the scope of our analysis. To fine-tune a model with this API, a client can initialize adapters locally, alongside with a small task-specific “head”, then train both adapters and head on training minibatches. For each minibatch $(x, y) \in D$, a client calls forward($x, \theta$) to compute feature representations, then predicts with local “head” and computes task-specific loss function $L$. After that, a client performs backward pass: first, it computes gradients w.r.t. 
local head inputs $g_h = \frac{\partial L}{\partial h}$, then passes those gradients to a remote server via backward($x, \theta, g_h$) API call to compute gradients w.r.t. $\frac{\partial L}{\partial \theta}$. Finally, a client updates both $\theta$ and local “head” parameters using the optimizer of choice. Before building more advanced algorithms, let us analyze the privacy of client’s labels under standard fine-tuning. We consider an “honest, but curious” attacker model. This means that the server will faithfully run the forward and backprop computations as requested by the client without changing the results. Furthermore, we assume that servers are independent and do not communicate client’s data between each other. However, a server can recover client’s labels by performing arbitrary computations on top of any information it receives from the client. When training in this way, a client does not directly communicate training labels to the server. However, they do communicate inputs, adapter parameters, and gradients. Furthermore, the server communicates input representations that can be intercepted by a third party. In Figure 1, we train a DeBERTa-v2-XXL model on the SST-2 sentiment classification dataset. The top row depicts the gradients $g_h$ communicated by the client when calling backprop(·) at different training stages. In the bottom row, we similarly track activations $h(x, \theta)$ that server may compute based on the specified $x, \theta$. We defer further additional figures and details to Section 4.1. As we can see, both gradients and activations are arranged in such a way that simple k-means clustering would reveal which objects have the same label. The training activations (bottom row) do not reveal labels right away (at least not against this attack). However, they gradually “leak” private ![Figure 1](image-url) **Figure 1**: A visualization of top-2 principal components of gradients (top) and activations (bottom) from different fine-tuning steps (left to right). Color indicates the training labels (binary). label information during training. From an information-theoretic perspective, knowing just one vector of gradients or trained activations allows the attacker to learn all but one bit of information about client’s private labels. To summarize, leaving any one data source unprotected (gradients, activations or parameters) would already compromise label privacy. However, we found that gradients and activations require different means of protection. 3.2 Privacy-preserving backpropagation In this section, we formulate an algorithm for “anonymizing” the gradients communicated over a single training step with arbitrary PEFT type. Several prior works approach this by modifying the training objective or model architecture. However, when dealing with a real-world PEFT workload with optimized hyperparameters, changing the model or loss function often results in reduced model accuracy. Thus, we seek an algorithm that preserves both model and training objective. We design our algorithm based on an observation that backpropagation is conditionally linear in output gradients, even when the model itself is nonlinear. Formally, if we take a model \( h(\cdot, \cdot) \), a fixed set of trainable parameters \( \theta \) and input samples \( x \), the backprop “function” computes \( \text{backprop}(x, \theta, \frac{\partial L}{\partial h(x, \theta)}) = \frac{\partial L}{\partial \theta} \). 
For convenience, we shorten it to \( \text{backprop}(x, \theta, g_h) = g_\theta \), where \( g_h = \frac{\partial L}{\partial h(x, \theta)} \) represents the gradients of some objective function with respect to the model activations (outputs), and \( g_\theta = \frac{\partial L}{\partial \theta} \) are the gradients of the same objective function w.r.t. the trainable parameters. In this notation, backprop is linear in terms of \( g_h \) for any fixed \( x, \theta \). This becomes self-evident if we view backprop as multiplying \( g_h \) by the Jacobian of the model outputs w.r.t. the trainable parameters, \( \frac{\partial h(x, \theta)}{\partial \theta} \). If \( x, \theta \) are constant, the Jacobian is also constant, and backprop is a linear operator:

\[ \text{backprop}\left(x, \theta, \frac{\partial L}{\partial h(x, \theta)}\right) = \frac{\partial L}{\partial \theta} = \frac{\partial L}{\partial h(x, \theta)} \times \frac{\partial h(x, \theta)}{\partial \theta} \]

This observation allows us to design a private backpropagation protocol. To illustrate this protocol, let us first consider a distributed API with two identical independent servers that offer the backprop API. Then, for an arbitrary vector \( \vec{z} \), linearity lets us rewrite:

\[ \text{backprop}(x, \theta, g_h) = \text{backprop}\left(x, \theta, \tfrac{1}{2} g_h + \vec{z}\right) + \text{backprop}\left(x, \theta, \tfrac{1}{2} g_h - \vec{z}\right) \]

During API fine-tuning, we obtain \( \text{backprop}(x, \theta, \tfrac{1}{2} g_h + \vec{z}) \) using an API call to server 1, whereas the second term \( \text{backprop}(x, \theta, \tfrac{1}{2} g_h - \vec{z}) \) translates to an API call to server 2. Note that neither of the two servers has access to the true gradient \( g_h \): each only receives the sum \( \tfrac{1}{2} g_h \pm \vec{z} \). If we sample a large noise vector \( \vec{z} \) (\( \text{Var}(\vec{z}) \gg \|g_h\|_2^2 \)), this sum becomes indistinguishable from noise. However, when both API calls finish, the client can add the results to recover the true \( g_\theta = \frac{\partial L}{\partial \theta} \). If both requests are processed by the same server, it can obviously recover \( g_h \) by adding up the gradient inputs from both calls, which leads us to the final step. Instead of generating a single noise vector, the client needs to generate (privately) a set of \( m > 1 \) random vectors \( \hat{g}_1, \ldots, \hat{g}_m \) and scalars \( \alpha_1, \ldots, \alpha_m \) such that \( g_h = \sum_{i=1}^{m} \alpha_i \cdot \hat{g}_i \). Then, for each \( \hat{g}_i \), the client computes \( \text{backprop}(x, \theta, \hat{g}_i) \) via \( m \) parallel API calls. Once this is done, the client recovers \( g_\theta = \sum_{i=1}^{m} \alpha_i \cdot \text{backprop}(x, \theta, \hat{g}_i) \). Note that the client does not reveal the scalars \( \alpha_1, \ldots, \alpha_m \) to anyone.

This procedure allows the client to safely compute gradients once, but, in practice, the client usually needs to run many consecutive steps. This creates an additional attack vector: if the same server receives two sets of parameters \( \theta_t, \theta_{t+1} \), it could potentially recover \( g_\theta \) by inverting the optimizer. In the simplest case, if the server somehow knows that the client computes \( \theta_{t+1} = \theta_t - \eta \cdot g_\theta \), then it can compute \( g_\theta = \frac{\theta_t - \theta_{t+1}}{\eta} \). While \( g_\theta \) does not necessarily leak private labels, a server could, in some cases, use \( g_\theta \) to recover \( g_h \), either fully (e.g., if the Jacobian is invertible) or partially.
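Since the recovery step relies only on the linearity of backprop in \( g_h \), the client-side logic admits a compact sketch. The particular construction of the obfuscated vectors below is one possible choice (the text only requires that they sum back to \( g_h \) and individually look like noise), and `backprop_api` is a hypothetical stand-in for a server's backprop endpoint.

```python
import torch

def obfuscate(g_h, m, noise_scale=1e3):
    # Split g_h into m noise-like vectors g_hat_i and private scalars alpha_i
    # such that sum_i alpha_i * g_hat_i == g_h.
    alphas = torch.rand(m) + 0.5                         # kept private by the client
    g_hats = [noise_scale * torch.randn_like(g_h) for _ in range(m - 1)]
    residual = g_h - sum(a * g for a, g in zip(alphas[:-1], g_hats))
    g_hats.append(residual / alphas[-1])                 # noise-dominated when noise_scale >> ||g_h||
    return g_hats, alphas

def private_backprop(backprop_api, x, theta, g_h, m=2):
    # Client-side recovery of dL/dtheta from m obfuscated API calls,
    # ideally routed to different servers.
    g_hats, alphas = obfuscate(g_h, m)
    g_thetas = [backprop_api(x, theta, g) for g in g_hats]
    return sum(a * g for a, g in zip(alphas, g_thetas))
```

Because the weighted sum reproduces \( \text{backprop}(x, \theta, g_h) \) exactly, the obfuscation does not change the training dynamics.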
--- 2The missing bit corresponds to attacker not knowing which cluster corresponds to label “1”. 3We validate that experimentally in [4,2]. The client has two ways to prevent this attack. The first one is to ensure that no single server runs backprop on two consecutive steps. This is easy to do in decentralized systems where there are many potential servers. However, even when there is a single server, they could be required to set up multiple trusted execution environments (Nvidia, 2023). A more risky alternative is to ensure that the gradients cannot be reversed from consecutive parameters: randomize initial optimizer statistics or add noise to parameters. This solution is easier, but it can adversely affect convergence in some cases. The resulting procedure is formulated in Algorithm 1. **Algorithm 1** private\_backprop - Privacy-Preserving backpropagation (from client’s perspective) **Input:** \( x \) inputs, \( \theta \) adapter weights, \( g_h \) gradients w.r.t. activations, \( m > 1 \) - number of passes 1: \( \hat{g}_h^1, \ldots, \hat{g}_h^m, \alpha_1, \ldots, \alpha_m = \text{obfuscate}(g_h, m) \) \( \triangleright \) s.t. \( \sum_{j=1}^{m} \alpha_j \cdot \hat{g}_h^j = g_h \) 2: for \( j = 1, \ldots, m \) do 3: \( \hat{g}_\theta^j = \text{backprop}(x, \theta, \hat{g}_h^j) \) \( \triangleright \) server computes \( \hat{g}_h^j \times \partial h / \partial \theta \) 4: end for 5: \( g_\theta = \sum_{j=1}^{m} \alpha_j \cdot \hat{g}_\theta^j \) Return: \( g_\theta \) To summarize, we formulated a procedure that allows a client to compute gradients privately for any given model and PEFT type. Furthermore, since eq. [2] recovers true gradients, this obfuscation method does not affect the training dynamics. However, as we have shown in Section 3.1, gradients are not the only source of privacy leakage. ### 3.3 Full fine-tuning The other major attack vector are training activations. As the model fits to training data, it’s intermediate activations \( h(x, \theta) \) allow attackers to recover labels. To combat this issue, we take advantage of the fact that PEFT has few trainable parameters. Instead of learning just one set of trainable parameters, a client creates \( n \) independent adapter sets \( \theta_1, \ldots, \theta_n \). Note that this does not require \( n \) unique servers: a single server can run multiple sets of adapters. Furthermore, a client can alternate between using different servers for the same adapters. During forward pass, the outputs of different adapters are mixed together using randomized mixing weights \( W \in \mathbb{R}^{n,d} \): \[ h'(x, \theta_1, \ldots, \theta_n) = \sum_{i=1}^{n} W_i \odot h(x, \theta_i) \] Overall, we design this model in such a way the combined model \( h' \) can predict the labels, but the adapters \( h(x, \theta_i) \) do not allow predicting these labels without knowing the mixing weights \( W \). The mixing weights are generated such that initial activations \( h'(x, \ldots) \) are equal to mean \( h(x, \cdot) \) for all \( x \). To achieve this, we generate \( W \) as follows: first, we generate \( n \cdot (n-1)/2 \) d-dimensional random vectors \( \xi_{i,j} \in \mathbb{R}^d \forall i \in [1,n], j \in [i+1,n] \). 
Then, we add them up in the following way: \[ W = \begin{pmatrix} \frac{1}{n} \vec{e} + \xi_{1,2} + \xi_{1,3} + \cdots + \xi_{1,n} \\ -\xi_{1,2} + \frac{1}{n} \vec{e} + \xi_{2,3} + \cdots + \xi_{2,n} \\ \vdots \\ -\xi_{1,n} - \xi_{2,n} - \xi_{3,n} - \cdots + \frac{1}{n} \vec{e} \end{pmatrix} \] Here, $\vec{e}$ stands for a vector of all ones. The purpose of these mixing weights is to ensure that the gradients w.r.t. individual $h(x, \theta_i)$ are obfuscated, but the averaged model behaves the same as regular PEFT adapter. To illustrate this, consider $n=2$ identical LoRA adapters $\theta_1, \theta_2$. During the first training step $h(x, \theta_1) = h(x, \theta_2)$. Therefore, $$h'(x, \theta_1, \ldots, \theta_n) = (1/2\vec{e} + \xi_{1,2}) \odot h(x, \theta_1) + (1/2\vec{e} - \xi_{1,2}) \odot h(x, \theta_2) = h(x, \theta_1)$$ (5) However, the two adapters will learn different functions as they receive different gradients. From the first update on, $h'$ will be equal to an average of adapter predictions. Finally, to ensure that individual adapters $h(x, \theta)$ do not accidentally “learn to leak” labels, we maintain this over the course of training with a privacy regularizer inspired by Ganin & Lempitsky (2015). This ensures that it is impossible to predict labels from individual adapters $h(x, \theta_i)$. Intuitively, on each training step, client fits $n$ linear “heads” that learn to predict labels $y$ from $h(x, \theta_i)$, then performs an adversarial update of $\theta_i$ to prevent the “head” from predicting $y$. Formally, each of $n$ “heads” minimize the same objective function as the full model. For instance, if the full model solves multi-class classification, each head is trained to minimize cross-entropy: $$\eta^*_i = \arg\min_{\eta_i} \sum_{x,y \in D} -y \cdot \log \frac{e^{(\eta_i, h(x, \theta_i))}}{\sum_k e^{(\eta_i, h(x, \theta_i))}},$$ where $y$ is one-hot encoding of the correct class. The whole adversarial update takes place locally on client’s side, using the same $h(x, \theta)$ it uses for the main training objective. The resulting procedure appears complicated but it typically takes negligible time compared to running the large pre-trained model $h(x, \theta)$. Furthermore, since adversarial “heads” are linear, minimizing the objective above is done with standard logistic regression solver. To summarize, our approach combines the two proposed ideas: we use the private backpropagation algorithm from Section 3.2 to protect the gradients, then trains a mixture of adapters in such a way that obfuscates learned activations leaking labels. The resulting procedure is described in Algorithm 2. In the next section, we will evaluate the efficacy of P3EFT on popular NLP benchmarks. 4 EXPERIMENTS The main goal of this study is to find a practical method of private fine-tuning that would scale to modern pre-trained transformers. To verify if P3EFT meets these criteria, we chose to evaluate it not on typical datasets used in split-learning (e.g. CIFAR10, Krizhevsky, 2009), but on fine-tuning recent pre-trained transformers on NLP benchmarks representative of real-world tasks. To that end, we chose two pre-trained models: DeBERTa-XXLarge (He et al., 2021) and Flan-T5-Large (Chung et al., 2022). We train these models to perform sentiment classification on SST-2 (Socher et al., 2013) and paraphrase identification on MRPC (Dolan & Brockett, 2005), both of which are parts of the GLUE benchmark (Wang et al., 2018). For each model, we train LoRA adapters with rank 8. 
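Returning to the mixing construction of Section 3.3, the weights in Equation (4) can be generated pair by pair. The sketch below is an illustrative implementation (the noise scale and shape conventions are our assumptions); it makes explicit that the rows of \( W \) sum to the all-ones vector, so the mixture initially behaves like a single adapter.

```python
import torch

def make_mixing_weights(n, d, noise_std=1.0):
    # Eq. (4): start every row at 1/n, then for each pair i < j add a random
    # vector xi_{i,j} to row i and subtract it from row j. The rows therefore
    # sum to the all-ones vector, while each individual row looks like noise.
    W = torch.full((n, d), 1.0 / n)
    for i in range(n):
        for j in range(i + 1, n):
            xi = noise_std * torch.randn(d)
            W[i] += xi
            W[j] -= xi
    return W  # kept private by the client

def mixed_forward(h_list, W):
    # h'(x) = sum_i W_i * h(x, theta_i), with h_list holding per-adapter activations [B, d].
    return sum(W[i] * h_i for i, h_i in enumerate(h_list))
```

With identical adapters at initialization, `mixed_forward` reduces to the output of a single adapter, matching Equation (5).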
To improve reproducibility, we reuse the recommended hyperparameters from Hu et al. (2022) for the two corresponding tasks.

Figure 3: Gradients of cross-entropy w.r.t. LoRA parameters for DeBERTa-v2-XXLarge. The top row corresponds to normal backpropagation and the bottom row uses privacy-preserving backprop.

4.1 Privacy of Gradients and Activations

For this experiment, we train DeBERTa-XXLarge on the SST-2 dataset using regular LoRA adapters. First, we train the model locally and track model activations $h$ and gradients w.r.t. those activations. We apply principal component analysis to project them into two dimensions and visualize them in Figure 1. Similarly, we visualize gradients of individual per-sample loss functions w.r.t. LoRA parameters $\theta$ in Figure 3 (top row). As we mention earlier, a hypothetical attacker could easily recover private labels by performing K-means clustering over any data source: activations, gradients w.r.t. activations, as well as individual gradients w.r.t. parameters.

Next, we run the same experiment using privacy-preserving backpropagation as defined in Section 3.2. We use $m = 2$ with the noise variance set to 1000. As expected, we observed the same learning curve as with normal training. However, instead of sending gradients w.r.t. activations to the server, the client uses specially crafted random noise vectors that are not informative. In Figure 3 (bottom) we plot the same kind of individual gradients as in the top row, except that we visualize the gradients computed by the first of the two servers. Finally, we train XGBoost (Chen & Guestrin, 2016) with default hyperparameters to predict labels given the noisy gradients (pre-PCA): the resulting classifier is able to fit the training data perfectly, but has at most 50.4% accuracy on a balanced test set.

Figure 4: Combined PEFT accuracy and privacy evaluations. See detailed description in Section 4.2.

4.2 Main fine-tuning experiments

Next, we evaluate the full P³EFT algorithm in the same setting. To control for task and model type, we consider three fine-tuning setups: DeBERTa-v2-XXLarge on SST-2, DeBERTa-v2-XXLarge on MRPC, and Flan-T5-Large on SST-2. For each setup, we compare against three baselines:

- **Distance Correlation (DC).** Our re-implementation of the distance correlation defense formulated in Sun et al. (2022). For this baseline, we tune $\alpha$ separately for each task, choosing it to maximize accuracy under the constraint that DC has the same or comparable privacy as our algorithm.
- **Training w/o LoRA adapters.** In this baseline, the client gathers $h$ activations once at the beginning, with no adapters, then proceeds to train local “head” layers on top of said activations. As a result, the algorithm cannot leak information about training labels except for what is stored in $X$.
- **Training LoRA with no regularization** refers to training a single LoRA adapter normally. This baseline represents an upper bound on model accuracy, but lacks privacy.

For each algorithm, we report the task-specific metric (Accuracy or F1) as well as 3 privacy measures:

- **Spectral attack** - vulnerability to the attack proposed in Sun et al. (2022), measured as classifier ROC AUC; lower means better privacy.
- **Norm attack** - vulnerability to a variant of the attack proposed in Li et al. (2022), measured as classifier ROC AUC; lower is better.
- **LogReg** - the cross-validation accuracy of a logistic regression trained to predict class labels. A pessimistic estimate of privacy; lower means better privacy.
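For concreteness, simplified stand-ins for two of these probes are sketched below: a norm-based ranking attack on communicated gradients and the K-means clustering attack on activations mentioned in Section 4.1. These are our own minimal illustrations, not the re-implementations used in the experiments.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import roc_auc_score

def norm_attack_auc(grads, labels):
    # Rank examples by gradient norm and measure how well that ranking separates
    # the two classes; 0.5 means no leakage, values near 1.0 mean labels are exposed.
    scores = np.linalg.norm(grads, axis=1)
    auc = roc_auc_score(labels, scores)
    return max(auc, 1.0 - auc)   # the attacker does not care about the sign of the ranking

def kmeans_attack_accuracy(activations, labels):
    # Cluster activations into two groups; the attacker is missing only the single
    # bit that maps clusters to labels, so report the better of the two assignments.
    pred = KMeans(n_clusters=2, n_init=10).fit_predict(activations)
    acc = float((pred == labels).mean())
    return max(acc, 1.0 - acc)
```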
We report main fine-tuning results in Figure 4. Overall, P^3FT algorithm achieves nearly the same accuracy and outperforms Distance Correlation-based algorithm in terms of accuracy given the same privacy level. However, both P^3FT and DC can achieve different accuracy-to-privacy trade-offs depending on the value of the regularizer coefficient. To explore this, we also conduct sensitivity experiments where we vary the regularizer coefficients of both algorithms and report our findings in Figure 5. While both algorithms offer a wide range of configurations, P^3EFT offers slightly better trade-offs. We evaluate additional hyperparameter configurations in Appendix D. 5 Conclusion In this work, we analyze privacy-preserving fine-tuning of large neural networks in the context of parameter-efficient fine-tuning and the two-party split learning setting. We show that while standard fine-tuning suffers from label leakage even in the parameter-efficient case, it is possible to leverage the efficiency of PEFT to alter the procedure without any significant performance drawbacks. We test the resulting method, named P^3EFT, on a range of pretrained language models and multiple datasets, showing that it is competitive with a strong baseline in terms of label privacy while having higher task performance. In future work, it might be possible to explore alternative ways of using parameter-efficient fine-tuning to preserve privacy. REFERENCES Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative inference and fine-tuning of large models. *arXiv preprint arXiv:2209.01188*, 2022. URL https://arxiv.org/abs/2209.01188. Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD ’16, pp. 785–794, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. URL http://doi.acm.org/10.1145/2939672.2939785. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022. URL https://arxiv.org/abs/2210.11416. William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop on Paraphrasing (IWP2005)*, 2005. URL https://aclanthology.org/I05-5002. Dreambooth API. Dreambooth API – Easily finetune Stable Diffusion and generate customised AI images — dreamboothapi.ai. [https://dreamboothapi.ai/](https://dreamboothapi.ai/) 2023. [Accessed 28-09-2023]. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 1180–1189, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/ganin15.html. Otkrist Gupta and Ramesh Raskar. Distributed learning of deep neural network over multiple agents. *Journal of Network and Computer Applications*, 116:1–8, 2018. ISSN 1084-8045. 
doi: https://doi.org/10.1016/j.jnca.2018.05.003. URL https://www.sciencedirect.com/science/article/pii/S1084804518301590. Karen Hambardzumyan, Hrant Khachatryan, and Jonathan May. WARP: Word-level Adversarial ReProgramming. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 4921–4933, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.381. URL https://aclanthology.org/2021.acl-long.381. Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Richard Nock, Giorgio Patrini, Guillaume Smith, and Brian Thorne. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption, 2017. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=XPZlaoctsD. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 2790–2799. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/houlsby19a.html. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=nZeVKeefYf9.
0oIkKERYhH
While the authors provide a simple convergence analysis for their generator-free training method, the analysis falls short of explaining why removing the generator yields improved results on graph tasks, which diminishes the persuasiveness of the approach.
DOG: Discriminator-only Generation Beats GANs on Graphs Anonymous authors Paper under double-blind review Abstract We propose discriminator-only generation (DOG) as a generative modeling approach that bridges the gap between energy-based models (EBMs) and generative adversarial networks (GANs). DOG generates samples through iterative gradient descent on a discriminator’s input, eliminating the need for a separate generator model. This simplification obviates the extensive tuning of generator architectures required by GANs. In the graph domain, where GANs have lagged behind diffusion approaches in generation quality, DOG demonstrates significant improvements over GANs using the same discriminator architectures. Surprisingly, despite its computationally intensive iterative generation, DOG produces higher-quality samples than GANs on the QM9 molecule dataset in less training time. 1 Introduction Generative modeling approaches, such as denoising diffusion (Sohl-Dickstein et al., 2015; Ho et al., 2020) and GANs (Goodfellow et al., 2014; Sauer et al., 2023), have revolutionized content creation across various domains. Given their wide-ranging potential, it is essential to thoroughly explore their design space. GANs consist of a generator, which generates samples from noise vectors to match a real data distribution, and a discriminator, which assigns a realism score to distinguish between real and generated samples. Notably, the discriminator is discarded after training, leaving only the generator in use. Optimizing generator architectures requires substantial effort (Karras et al., 2019, 2020, 2021; Walton et al., 2022; De Cao & Kipf, 2018; Krawczuk et al., 2021), resulting in complex settings with multiple generators being used in conjunction in recent graph GANs (Martinkus et al., 2022). We explore DOG, an approach that removes the need for a generator altogether and instead directly leverages the information stored in a discriminator, aligning it conceptually with EBMs (Ackley et al., 1985; Xie et al., 2016, 2017; Du & Mordatch, 2019), and with refining generated samples using a discriminator (Tanaka, 2019). DOG uses only a single discriminator model for generation. It starts with a pure noise sample in the target domain and optimizes it directly by gradient descent w.r.t. a generation loss on the output of the discriminator. The generated samples, along with real samples, are then used to train the discriminator by alternating sample generation and discriminator updates, as shown in Figure 1. In contrast to GANs, where a generator and a discriminator act as adversaries, the DOG discriminator serves as its own adversary. Our primary contribution is demonstrating that DOG outperforms GANs in graph generation without necessitating adjustments to the discriminator architecture or the original GAN hyperparameters. For deeper insight, we conduct a convergence analysis of DOG and illustrate it on a 2D toy dataset. Algorithm 1 Pseudocode for DOG with Wasserstein loss and a batch size of 1. For a PyTorch version, see Algorithm 2 (Appendix). 
1: function GO($D_\theta$, $x_0^{\text{gen}}$) ▷ Generation optimization
2:   for $t \in [0, 1, \ldots, T - 1]$ do
3:     $L_G \leftarrow -D_\theta(x_t^{\text{gen}})$
4:     $x_{t+1}^{\text{gen}} \leftarrow \text{optimizer}_x(x_t^{\text{gen}}, \nabla_{x_t^{\text{gen}}} L_G)$ ▷ E.g., Equation (2)
5:   end for
6:   return $x_T^{\text{gen}}$
7: end function
8: Randomly initialize discriminator parameters $\theta$ ▷ Training
9: while training not done do
10:   $x_{\text{real}} \sim$ training set
11:   $x_{\text{gen}} \leftarrow \text{GO}(D_\theta, \epsilon)$ with $\epsilon \sim \mathcal{P}$
12:   $L_D \leftarrow D_\theta(\text{sg}(x_{\text{gen}})) - D_\theta(x_{\text{real}})$ ▷ sg: stop gradient
13:   $\theta \leftarrow \text{optimizer}_D(\theta, \nabla_\theta L_D)$
14: end while
15: $x_{\text{gen}} \leftarrow \text{GO}(D_\theta, \epsilon)$ with $\epsilon \sim \mathcal{P}$ ▷ Inference

2 METHOD: DISCRIMINATOR-ONLY GENERATION

GANs rely on two loss functions: $L_G$, which incentivizes the generator $G$ to generate samples $x_{\text{gen}}$ that receive high scores from the discriminator $D$, and $L_D$, which encourages $D$ to assign low scores to $x_{\text{gen}}$ and high scores to real samples $x_{\text{real}}$. An example is the Wasserstein loss (Arjovsky et al., 2017):

$$L_G = -\mathbb{E}[D(x_{\text{gen}})] \quad \text{and} \quad L_D = \mathbb{E}[D(x_{\text{gen}})] - \mathbb{E}[D(x_{\text{real}})]$$ (1)

For DOG, we reuse these loss functions (with batch averaging instead of expectation), but replace sample generation via a generator with a generation optimization (GO) process. GO aims to generate samples $x_{\text{gen}}$ that (locally) minimize $L_G$, akin to samples from a GAN generator. As there is no generator model $G$ in DOG, we refer to $L_G$ as a "generation" loss rather than a "generator" loss.

GO depends on the current discriminator $D$ (initialized randomly before training) and a random starting sample $x_0^{\text{gen}} \sim \mathcal{P} = \mathcal{N}(0, I)$. We iteratively generate a sample $x^{\text{gen}} = x_T^{\text{gen}} = \text{GO}(D, x_0^{\text{gen}})$ using the gradient of $L_G$ with respect to $x_t^{\text{gen}}$. The sequence of samples $x_t^{\text{gen}}$ forms a GO path. We optimize for $T$ steps using gradient descent with a learning rate $\eta$:

$$x_{t+1}^{\text{gen}} = x_t^{\text{gen}} - \eta \nabla_{x_t^{\text{gen}}} L_G(x_t^{\text{gen}})$$ (2)

Using only the final $x_{\text{gen}}$ and a real sample $x_{\text{real}}$, we train $D$ to minimize the discriminator loss $L_D(x_{\text{real}}, \text{sg}(x_{\text{gen}}))$, where sg stops the gradient. As outlined in Algorithm 1, this process is repeated for each training step.

We stop the gradient to prevent collapse. Suppose we retained the gradients from GO for the discriminator weight update step and also required gradient computation for the weights of $D$. In that case, the computational graph through which we backpropagate for a training step would encompass all the $T$ forward and backward passes through $D$ from GO. This would lead to memory issues. Additionally, since $L_D$ compels $D$ to assign a lower score to the final generated sample $x_T^{\text{gen}}$, $D$ would have an advantage in altering its score surface to eliminate GO paths from noise to realistic samples. For instance, by giving zero gradient to noisy inputs, the generated samples would remain noisy and $D$ could easily assign a low score to them. Therefore, we do not consider any intermediate samples $x_t^{\text{gen}}$ ($t < T$) or their gradients for updating $D$.
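The appendix's PyTorch listing is not reproduced here, but a rough sketch of Algorithm 1 with the Wasserstein losses of Equation (1) and the plain gradient-descent update of Equation (2) could look as follows. The discriminator `D` is assumed to map a batch to one score per sample, and regularizers such as the gradient penalty used later are omitted.

```python
import torch

def generation_optimization(D, x0, T=100, eta=0.1):
    # GO: gradient descent on the input w.r.t. the generation loss L_G = -D(x).
    x = x0.clone().requires_grad_(True)
    for _ in range(T):
        loss_g = -D(x).mean()
        (grad,) = torch.autograd.grad(loss_g, x)
        x = (x - eta * grad).detach().requires_grad_(True)   # Eq. (2)
    return x.detach()

def dog_training_step(D, opt_D, x_real):
    # One training step: generate from noise via GO, then update D on the
    # (detached) generated batch and a real batch with the Wasserstein loss.
    x_gen = generation_optimization(D, torch.randn_like(x_real))
    loss_d = D(x_gen).mean() - D(x_real).mean()   # x_gen is detached: stop-gradient
    opt_D.zero_grad()
    loss_d.backward()
    opt_D.step()
    return loss_d.item()
```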
$D$ is also not conditioned on any timestep $t$, and the same $T$ is used for both training and inference. DOG provides flexibility in the choice of $\mathcal{P}$, $L_G$, and $L_D$. In practice, we may also incorporate learning rate scheduling in GO, and Equation (2) can be extended for advanced optimizers.

3 CONVERGENCE ANALYSIS

To gain further insight, inspired by Mescheder et al. (2018), we provide a convergence analysis of DOG using a straightforward dataset and a simple $D$. We show that in this setting, GO converges to local minima of $L_G$ (inner convergence), and that the discriminator training also converges (outer convergence). Additionally, since EBMs are conceptually similar to DOG, we elucidate how an EBM would falter here. For an experimental validation of the convergence analysis and a formal examination of DOG, with a discussion of our assumptions, see Appendix A.

We consider an underlying data distribution where samples lie on a regular 1-D grid, i.e., $x_{\text{real}} \in \{ kn \mid n \in \mathbb{Z} \}$ (with a fixed scalar $k$). Suppose our training set only contains two real samples: $x_0^{\text{real}} = 0$ and $x_1^{\text{real}} = 2\pi$. For the discriminator, we select $D(x) = \cos(\theta x)$ with a single learnable scalar parameter $\theta$. Assuming suitable hyperparameters $T$ and $\eta$, GO converges to a local minimum of $L_G$ (maximum of $D$) from any starting sample $x_0^{\text{gen}}$, i.e., $x_T^{\text{gen}} \in \{ 2\pi n/\theta \mid n \in \mathbb{Z} \}$ such that $L_G(x_T^{\text{gen}}) = -1$ (with $D(x_T^{\text{gen}}) = 1$), showing inner convergence.

The expected value of the Wasserstein discriminator loss function can be computed as

\[
\mathbb{E}[L_D] = \mathbb{E}[D(x^{\text{gen}})] - \mathbb{E}[D(x^{\text{real}})]
= \mathbb{E}[D(x^{\text{gen}})] - 0.5\,D(x_0^{\text{real}}) - 0.5\,D(x_1^{\text{real}})
= 1 - 0.5\,D(0) - 0.5\,D(2\pi)
= 0.5 - 0.5\cos(2\pi\theta).
\]

Differentiating $\mathbb{E}[L_D]$ with respect to $\theta$ yields $d\mathbb{E}[L_D]/d\theta = \pi \sin(2\pi\theta)$. Training with stochastic gradient descent on $\theta$ converges to $\theta = 1$ if an appropriate learning rate is used and $\theta$ is initialized sufficiently close, displaying outer convergence.

If we choose a broad distribution $\mathcal{P}$ for $x_0^{\text{gen}}$, DOG generalizes to the underlying grid data distribution because we can also generate samples that are not among the real samples. For instance, $x_T^{\text{gen}} = \text{GO}(D, 4\pi + \epsilon) = 4\pi$ for $x_0^{\text{gen}} = 4\pi + \epsilon$ and a small $\epsilon$. Note that other values, such as $\theta = 2$, would also minimize $L_D$ and can be reached by training with an appropriately initialized $\theta$. The choice of the discriminator and the initialization leads to different generalization patterns.

We could define an EBM by interpreting the model output as negative energy with corresponding density $p(x) \propto \exp(\cos(\theta x))$ (Du & Mordatch, 2019). However, such an EBM would fail for this setting: even if the modes with high density lie on the grid points, the density between the grid points is never zero. Therefore, any faithful sampling approach will also yield invalid samples between the grid points. In contrast, DOG's GO only generates valid samples, showing an advantage of DOG over EBMs in this discrete setting.
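The toy analysis can also be checked numerically. The following sketch is our own illustration rather than the experimental validation referenced in Appendix A: it runs GO on $L_G(x) = -\cos(\theta x)$ and then simulates the outer update with the derived gradient $\pi \sin(2\pi\theta)$; step sizes and initial values are arbitrary assumptions.

```python
import numpy as np

theta = 1.3          # discriminator parameter, initialized close to 1
eta, T = 0.05, 200   # GO step size and number of steps

def go(theta, x0):
    # Inner loop: gradient descent on L_G(x) = -cos(theta * x).
    x = x0
    for _ in range(T):
        x -= eta * theta * np.sin(theta * x)   # d/dx[-cos(theta*x)] = theta*sin(theta*x)
    return x

# Inner convergence: GO ends (approximately) on a maximum of D, i.e. a multiple of 2*pi/theta.
x_T = go(theta, x0=4.0)
print(x_T * theta / (2 * np.pi))   # ~1.0, so x_T is close to 2*pi/theta

# Outer convergence: with generated samples at maxima of D, dE[L_D]/dtheta = pi*sin(2*pi*theta).
for _ in range(500):
    theta -= 0.05 * np.pi * np.sin(2 * np.pi * theta)
print(theta)                        # approaches 1.0 when initialized sufficiently close
```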
4 EVALUATION We evaluate DOG on datasets from multiple domains, including a 2D toy dataset with 25 Gaussians (Tran et al., 2018) and several graph datasets such as QM9, a molecule dataset (Ramakrishnan et al., 2014). For a proof of concept with limited generation quality in the image domain, see Appendix B. Based on the results of early experiments, we generally use Adam (Kingma & Ba, 2015) as the optimizer for the GO process with \( T = 100 \) and a OneCycle learning rate schedule (Smith & Topin, 2019) (\( \text{max\_lr} = 1.0 \) and a warm-up ratio of 0.3) for faster convergence compared to constant learning rate gradient descent. An implementation can be found in the supplementary material. 4.1 25-Gaussians Toy Dataset To generate a 25-Gaussians dataset, we independently draw 2000 samples from a Gaussian with covariance \( \Sigma = I \times 0.0001 \) for each mean \( \mu \) in \([-1.5, -0.75, 0, 0.75, 1.5]^2\) to obtain 50,000 samples, similar to Tran et al. (2018). As in Wang et al. (2022), we use a discriminator with 2 hidden linear layers of 128 units each and LeakyReLU activations between them. We set \( \text{max\_lr} = 0.1 \) for the OneCycle in the GO process and train with a batch size of 128 for 50 epochs using Adam with \( \text{lr} = 10^{-5}, (\beta_1, \beta_2) = (0, 0.9) \), and Wasserstein losses (Arjovsky et al., 2017). Figure 2: Results of DOG on the 25-Gaussians toy dataset. (a) The background colors represent the discriminator scores, with brighter colors indicating higher scores. The black crosses represent the 50,000 real samples, while the white crosses represent 1,280 generated samples. The colored lines show a subset of the GO paths, which start at random locations $x_i^{\text{gen}}$ and end at the generated samples $x_T^{\text{gen}}$. (b) Typical GO loss curves, colored according to the paths shown in (a). Both the initial and final scores vary, and all GOs have converged. (c) Discriminator training loss of 25-Gaussians for DOG. A loss close to zero indicates that generated and real samples receive similar scores. As shown in Figure 2, DOG covers all modes of the data, and most of the generated samples are close to the real modes. Note that the discriminator scores (and the resulting loss surface) provide numerous GO paths to each mode, even though they are not perfectly symmetric, and some modes receive higher scores than others. In particular, the central mode at $(0, 0)$ is still covered, even though it receives a lower score than others, since GO often ends at local maxima. Furthermore, since the starting point density is $\mathcal{N}(0, I)$, which is highest at $(0, 0)$, we already initialize many GO paths close to this mode. 4.2 Graph Data To generate graphs with DOG, we directly optimize both the node feature matrix and the edge feature matrix (or adjacency matrix in the absence of edge features). After each generation step in GO, we apply constraints to the edge matrix by setting its diagonals to zero and ensuring symmetry. We also mask the edge matrix and the node feature matrix based on the given number of nodes for each batch element. This is similar to the post-processing step used by Martinkus et al. (2022) for SPECTRE, which is the current state-of-the-art for graph generation using GANs. 
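A sketch of this per-step projection is given below, assuming a plain adjacency tensor without edge features; the exact post-processing used here (and in SPECTRE) may differ in details.

```python
import torch

def apply_graph_constraints(node_feats, adj, num_nodes):
    # node_feats: [B, N, F], adj: [B, N, N], num_nodes: [B] number of nodes per graph.
    N = adj.shape[1]
    adj = 0.5 * (adj + adj.transpose(1, 2))                      # enforce symmetry
    adj = adj * (1.0 - torch.eye(N).unsqueeze(0))                # zero the diagonal
    mask = (torch.arange(N).unsqueeze(0) < num_nodes.unsqueeze(1)).float()  # [B, N]
    node_feats = node_feats * mask.unsqueeze(-1)                 # drop padded nodes
    adj = adj * mask.unsqueeze(1) * mask.unsqueeze(2)            # drop padded edges
    return node_feats, adj
```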
Model Although DOG could use the more advanced discriminator of SPECTRE, which uses conditioning on eigenvalues and eigenvectors generated by a complex multi-step generator, for simplicity, we opt for the simple discriminator architecture of GG-GAN (Krawczuk et al., 2021). Specifically, we use the GG-GAN (RS)* reference implementation, introduced by Martinkus et al. (2022), with default hyperparameters. For example, for QM9, we use a discriminator with 3 PPGN layers (Maron et al., 2019), each with 64 channels, and a batch size of 128. Data For a detailed description of each dataset used for evaluation, data splits, and statistics, we refer to Martinkus et al. (2022). In short, we evaluate DOG on all the benchmark datasets on which SPECTRE was evaluated, including both artificial and real-world graphs. The artificial datasets are Community-small (You et al., 2018), with 12-20 nodes; SBM, which includes graphs sampled from stochastic block models; and Planar, containing only planar graphs. The two real-world datasets are Proteins (Dobson & Doig, 2003), containing proteins with hundreds of nodes, and QM9 (Ramakrishnan et al., 2014), an exhaustive dataset of molecule graphs with up to 9 heavy atoms. Metrics Following Martinkus et al. (2022), we generate as many graphs for each dataset as there are in the test set. For the set of generated graphs, we report the percentage of valid, unique, and novel (V, Table 1: QM9 results. As discussed by Vignac et al. (2023), novelty is a problematic metric for QM9. Therefore, we report it here only for completeness. | METHOD | V.† | V. & U. † | V., U. & N.° | |-----------------|-----------|-----------|--------------| | DiGReSS | **99.0** | ≈95.2 | - | | SPECTRE | 87.3 | 31.2 | 29.1 | | GG-GAN (RS)* | 51.2 | 24.4 | 24.4 | | DOG 3L. 64 CH. | 93.8±1.3 | 91.7±0.9 | 58.1±2.4 | | DOG 6L. 128 CH. | **98.9** | **95.8** | 42.0 | U. & N.) graphs, as well as the maximum mean discrepancy (MMD) for various statistics (Degree, Clustering, Orbit, Spectrum, and Wavelet) between the generated set and the test set, where applicable. We also report the average ratio among all statistics between the MMD of the generated set and the MMD of the training set, where a ratio of 1.0 indicates that the generated set is as distinguishable from the test set as the training set. For Community-small, we use the earth mover’s distance (EMD) instead of MMD to maintain consistency with Martinkus et al. (2022). Baselines As baselines, we compare the performance of DOG with those of SPECTRE and GG-GAN (RS)*, as reported by Martinkus et al. (2022). Note that they use the node number distribution of the test set to generate samples, which we also adopt for consistency. Additionally, we include GG-GAN* or MoiGAN* (De Cao & Kipf, 2018; Martinkus et al., 2022) where they perform better. Where available, we also include the results of DiGress (Vignac et al., 2023), a recent diffusion-based approach that does not use a discriminator and updates the graph in discrete steps. Note that Vignac et al. (2023) report the MMD ratios directly instead of the raw MMD values. Therefore, we calculate them using the training MMD values of Martinkus et al. (2022), which may introduce rounding errors. The comparability of the results is further limited because DiGress uses different splits for some datasets and uses the node distribution of the training set for sampling. Other Settings Like Martinkus et al. (2022), we train for 30 epochs for QM9 and 12,000 epochs for Community-small and Planar. 
Due to the expensive GO, we use only 130 epochs for Proteins (instead of 1,020) and 2,400 for SBM (instead of 6,000). Each run uses a single Nvidia A40, except for Proteins where we use 2 GPUs (and maintain a total batch size of 4 as in SPECTRE). Training on our default QM9 configuration takes about 15 hours. We keep the seeds constant as in the reference implementation and, like Martinkus et al. (2022), select a checkpoint for testing according to the validation performance. As the validation metric, we use the mean MMD ratio, except for QM9, where we choose the ratio of valid and unique generated graphs. Like Martinkus et al. (2022), we use WGAN-LP ($\lambda_{LP} = 5$) and gradient penalty as losses (Petzka et al., 2018). We train using Adam with $lr = 10^{-4}$ and $(\beta_1, \beta_2) = (0.5, 0.9)$. Results As shown in Table 1, DOG outperforms all GAN approaches on QM9, including GG-GAN (RS)* and SPECTRE, while being competitive with DiGress when using a larger discriminator (6 layers, 128 channels each), closing the substantial performance gap between diffusion models and discriminator-based GANs. For our main experiment with the default configuration, we report the mean and standard deviation for five seeds. Examples of generated graphs are provided in Figure 3. The results for other graph datasets also demonstrate DOG’s superior graph generation performance over GANs, as can be observed in Table 2 (Community-small), Table 3 (Planar), Table 4 (SBM, max_lr = 0.1), and Table 5 (Proteins, max_lr = 0.1). Notably, DOG achieves this high performance... Table 3: Planar results. | METHOD | DEG. ↓ | CLUS. ↓ | ORBIT ↓ | SPEC. ↓ | WAVELET ↓ | RATIO ↓ | V. ↑ | U. ↑ | N. ↑ | V. + U. & N. ↑ | |-------------|--------|---------|---------|---------|-----------|---------|------|------|------|----------------| | DiGress | ≈0.00028 | ≈0.0372 | ≈0.00085 | | | | 75 | | | | | SPECTRE | 0.0015 | 0.0785 | 0.0017 | 0.0112 | 0.0059 | 2.9 | 25.0 | 100.0| 100.0| 25.0 | | GG-GAN (RS)* | 0.1005 | 0.2571 | 1.0313 | 0.2040 | 0.3829 | 586.3 | 0.0 | 100.0| 100.0| 0.0 | | DOG (OURS) | 0.00023 | 0.0827 | 0.0034 | 0.0067 | 0.0029 | 2.78 | 67.5 | 100.0| 100.0| 67.5 | Table 4: SBM results. | METHOD | DEG. ↓ | CLUS. ↓ | ORBIT ↓ | SPEC. ↓ | WAVELET ↓ | RATIO ↓ | V. ↑ | U. ↑ | N. ↑ | V. + U. & N. ↑ | |-------------|--------|---------|---------|---------|-----------|---------|------|------|------|----------------| | DiGress | ≈0.00128 | ≈0.0498 | ≈0.04335| | | | 74 | | | | | SPECTRE | 0.0015 | 0.0521 | 0.0414 | 0.0056 | 0.0028 | 2.0 | 52.5 | 100.0| 100.0| 52.5 | | GG-GAN (RS)* | 0.0038 | 0.1019 | 0.0613 | 0.1749 | 61.5 | 0.0 | 100.0| 100.0| 0.0 | | GG-GAN* | 0.0035 | 0.0689 | 0.0587 | 0.0094 | 0.0202 | 7.8 | 25.0 | 100.0| 100.0| 25.0 | | DOG (OURS) | 0.0003 | 0.0508 | 0.0401 | 0.0039 | 0.0013 | 1.15 | 72.5 | 100.0| 100.0| 72.5 | Table 5: Proteins results. °Note that novelty is severely limited for MolGAN* as discussed by Martinkus et al. (2022). | METHOD | DEG. ↓ | CLUS. ↓ | ORBIT ↓ | SPEC. ↓ | WAVELET ↓ | RATIO ↓ | U. ↑ | N. ↑ | U. & N. 
↑ | |-------------|--------|---------|---------|---------|-----------|---------|------|------|----------| | SPECTRE | 0.0056 | 0.0843 | 0.0267 | 0.0052 | 0.0118 | 16.9 | 100.0| 100.0| 100.0 | | GG-GAN (RS)* | 0.4727 | 0.1772 | 0.7326 | 0.4102 | 0.6278 | 875.8 | 100.0| 100.0| 100.0 | | MolGAN* | 0.0008 | 0.0644 | 0.0081 | 0.0021 | 0.0012 | 4.2 | 97.3 | 100.0| 97.3 | | DOG (OURS) | 0.0022 | 0.0682 | 0.0202 | 0.0014 | 0.0023 | 6.75 | 100.0| 100.0| 100.0 | Using only the simple GG-GAN (RS)* discriminator, and we did not tune any GAN-specific hyperparameters. We only tuned the DOG-specific max_lr and kept T = 100 constant. For examples of both real and generated graphs, see Figure 10 (Appendix). Hyperparameter Study We evaluate the impact of our design choices on DOG’s performance on QM9. In Table 6, we compare against the default model w.r.t. the fraction of valid and unique graphs. We also analyze the discriminator’s smoothed Wasserstein loss \( L_D \) at the end of training, as we observed in early experiments that a low loss correlates with poor samples. This suggests that it is undesirable for GO if the discriminator can easily distinguish between generated and real samples. First, we investigate the model size by doubling the number of channels and layers, both independently and together. Models larger than those used by the GAN baselines perform better. Next, we examine our choices for GO: While increasing the number of steps \( T \) may result in better convergence to local minima of \( L_G \) (maxima in \( D \)'s score surface), it also increases the computational cost of DOG, posing a trade-off. Doubling the number of steps to \( T = 200 \) results in higher discriminator losses but similar quality, indicating that \( T = 100 \) is sufficient. In contrast, halving the number of steps to \( T = 50 \) leads to worse performance. An even lower number of steps (\( T = 25 \)) leads to a failure mode where GO ends before the discriminator can be fooled, giving the discriminator a low loss. Reducing the learning rate in GO by a factor of 10 leads to only slightly worse results, indicating that GO is not overly sensitive to learning rates, at least when using Adam with OneCycle learning rate scheduling as the optimizer. However, not using learning rate scheduling (but tuned \( lr = 0.1 \)) significantly reduces the sample quality, suggesting that learning rate scheduling is crucial for GO, as again implied by the low discriminator loss. Replacing Adam with stochastic gradient descent (SGD) in the GO process performs similarly to the default setting but requires more learning rate tuning (\( max_lr = 100 \)). Not constraining the edge matrix after each GO step has a negative effect, demonstrating the benefit of staying within the data domain during GO. Table 6: Hyperparameter study for DOG on QM9. Each change to the default setting is indicated in the first column. Default: \( T = 100 \), OneCycle (\( max_lr = 1.0 \)), Adam, 3 layers, 64 channels, WGAN-LP losses. °Not comparable because different loss functions or number of epochs are used. | SETTING | V. & U. ↑ | FINAL \( L_D \) | |------------------|-----------|-----------------| | DEFAULT DOG | 91.7±0.9 | -0.17±0.04 | | 128 CH. | 93.8 | -0.08 | | 6 LAY. | 94.3 | -0.21 | | 128 CH., 6 LAY. 
| 95.8 | -0.14 | | T=200 | 92.0 | -0.07 | | T=50 | 81.3 | -0.79 | | T=25 | 34.1 | -44 | | MAX_LR = 0.1 | 90.2 | -0.29 | | ADAM → SGD | 91.1 | -0.16 | | NO ONECYCLE | 56.9 | -15 | | NO CONSTRAINT | 81.8 | -0.34 | | NS LOSSES | 84.4 | 1.27° | | 1 EPOCH TRAINING | 43.4 | -0.04° | Replacing WGAN losses with non-saturating (NS) GAN losses (Goodfellow et al., 2014; Karras et al., 2019) (but keeping $\lambda_{LP}$ and other hyperparameters) leads to only a moderate degradation, supporting our claim that DOG, like GANs (Qin et al., 2018), is flexible w.r.t. the choice of $L_D$ and $L_G$. While individual training steps of DOG take more time due to the iterative GO, we find that the overall training nevertheless progresses faster compared to SPECTRE. Although SPECTRE trains by default for 30 epochs ($\approx 8$ hours) on QM9, DOG achieves better test performance after only one epoch (< 0.5 hours), indicating faster convergence. 5 RELATED WORK GANs & EBMs From a GAN perspective (Goodfellow et al., 2014), DOG can be obtained by removing the generator model and, to generate samples, replacing it with GO, a multi-step gradient-based optimization utilizing the current discriminator. For training, the generator optimization step is removed without replacement, as only one model, the discriminator, remains. In the discriminator update step, the generated samples and real samples are processed in the same way as for GANs. In EBMs (Ackley et al., 1985; Xie et al., 2016; 2017; Du & Mordatch, 2019), an energy estimation model is trained to assign low energy scores to real samples and high energy scores to generated samples, essentially using a Wasserstein loss (Arjovsky et al., 2017). A main difference between DOG and EBMs lies in the sampling procedure, particularly in the absence of noise. EBMs interpret the model’s output as energy that determines the (unnormalized) density at each location. An ideal sampler would therefore draw samples from this density. An example of such a sampler is Langevin dynamics (LD) (Welling & Teh, 2011), which utilize noise and the gradient of the energy score with respect to the input sample to update the input sample in several steps. Potentially, such a sampling approach can be paired with a sample replay buffer (Du & Mordatch, 2019) to facilitate faster mixing of the underlying Markov chain and therefore save expensive generation steps during training. In practice, however, the sampling from the density is often not perfect: LD may use a limited number of steps (Nijkamp et al., 2019), or smaller noise than theory requires (Xiao et al., 2021), introducing a distribution shift. However, without any noise in the LD (Xiao et al., 2021) term this “EBMs w/ noise-free dynamics” and Nijkamp et al. (2019) argue that the gradient dominates the noise in certain settings), we perform gradient descent on the energy model’s input and, assuming convergence, only obtain samples that lie at local minima of the energy surface. Therefore, we no longer attempt to acquire samples accurately following the energy-defined density. This minimization is akin to minimizing a generation loss with Wasserstein GAN losses, as also pointed out by Xiao et al. (2021). Due to this difference in acquired samples, we abandon the “EBMs w/ noise-free dynamics” (Xiao et al., 2021) interpretation and instead consequently interpret it as a special case of DOG. 
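To make this contrast concrete, the two update rules can be sketched side by side; this is an illustration under our own simplifications (no momentum or learning rate scheduling), not code from either line of work.

```python
import torch

def langevin_step(energy, x, eps=0.01):
    # EBM sampling: follow -grad E(x) plus injected Gaussian noise, so that samples
    # (approximately) follow the density p(x) proportional to exp(-E(x)).
    x = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(energy(x).sum(), x)
    return (x - 0.5 * eps * grad + (eps ** 0.5) * torch.randn_like(x)).detach()

def go_step(D, x, eta=0.1):
    # DOG's noise-free GO: plain descent on L_G(x) = -D(x); with enough steps the
    # sample settles at a local maximum of D instead of being drawn from a density.
    x = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad((-D(x)).sum(), x)
    return (x - eta * grad).detach()
```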
This shift in interpretation from imperfect sampling from an energy-defined density via impaired LD (or any other technique) to optimizing the sample by locally minimizing a generation loss (maximizing a discriminator’s score) motivates the other distinctions between EBMs and DOG: Namely, we can employ tools developed for efficient gradient-based optimization, such as momentum-based optimizers and learning rate schedulers. Also, GAN losses, discriminator architectures, regularization techniques such as gradient penalty and other hyperparameters largely transfer, as demonstrated in our graph experiments. Furthermore, local minima can possess the same sampling density even if they receive different absolute scores from the discriminator/energy model (refer to Figure 2). Finally, a sample replay buffer is not required since DOG aims to converge to a local minimum instead of striving for a mixed Markov chain. Overall, DOG combines concepts from both GANs and EBMs. It repurposes GAN loss functions and discriminator architectures but generates samples iteratively using a single scalar prediction model like EBMs. Unlike GANs, DOG does not require a separate generator architecture and training process, and unlike EBMs, it does not involve sample replay buffers for training or random noise in the generation process. Instead, DOG employs direct gradient descent with respect to a generation loss, potentially combined with momentum-based optimizers and learning rate scheduling. EBMs and GANs are also related. While they can be combined (Zhao et al., 2017; Pang et al., 2020; Che et al., 2020) show how a GAN can be interpreted as an EBM, and Xiao et al. (2021) indicate that EBMs are self-adversarial like DOG. Additionally, Tanaki (2019) and Ansari et al. (2020) demonstrate how to use the gradient of a discriminator to refine the generation, which may come from a GAN generator. **Other Generative Modeling Approaches** Other approaches related to EBMs and DOG include score-based generative modeling (Song & Ermon, 2019; Song et al., 2021) and denoising diffusion approaches (Sohl-Dickstein et al., 2015; Ho et al., 2020). All of these methods generate samples incrementally by making dense predictions on the input, starting from noise. They allow for corrections of previous inaccuracies during generation, unlike GANs, which generally generate samples in a one-shot fashion. Diffusion models typically predict the remaining noise, while score-based models estimate the gradient of the energy surface. Unlike EBMs and DOG, the dense prediction in these methods does not come from a backward pass through a scalar prediction model but from a forward pass through a dense prediction model. In addition, these settings do not require expensive iterative sample generation during training but cheaply noise real samples. For inference, the updates of denoising diffusion follow a pre-determined schedule similar to a learning rate scheduling in GO. Further, Diffusion-GAN (Wang et al., 2022) and discriminator guidance (Kim et al., 2022) combine ideas from diffusion and GANs by using a discriminator to refine partially denoised samples generated by a diffusion model. Complementary to our work, Franceschi et al. (2023) provide a formal connection between GANs and score-based diffusion. DOG is similar to a particle model. However, the DOG discriminator is not conditioned on timesteps, and the samples may be updated with momentum and learning rate scheduling instead of just the gradients. 
In contrast to DOG, their Discriminator Flow approach requires this conditioning for the discriminator and uses the intermediate samples for training, making DOG conceptually simpler. Besides these, a range of other popular generative approaches have been explored, including normalizing flows (Rezende & Mohamed, 2015; Kobyzev et al., 2019; Xie et al., 2022), variational autoencoders (Kingma & Welling, 2014; Vahdat & Kautz, 2020), autoregressive generation (Brown et al., 2020; Lee et al., 2022), and masked modeling (Chang et al., 2022; 2023). **Adversarial Attacks** DOG is also related to adversarial robustness (Song et al., 2018; Dong et al., 2020): Both settings use the gradient of a model’s input to perturb samples, resulting in a change in the model’s predictions. GO is essentially an unrestricted adversarial attack on the discriminator \(D\) starting at noise, as it aims to maximize \(D\)'s scores. \(D\) is trained to be robust to these attacks. However, the goal of DOG is different: it aims at de novo content generation and receives only noise as input during inference, no adversarially perturbed samples. In contrast, adversarial reconstruction attacks (Balle et al., 2021; Haim et al., 2022) aim to generate realistic samples, like DOG. While reconstruction attacks aim to retrieve training samples, DOG’s goal is to generalize. **Graph Generation** Specifically for the graph domain, besides the multi-step GAN-based SPECTRE (Martinkus et al., 2022) and the state-of-the-art diffusion-based approaches such as DiGress (Vignac et al., 2023) or follow-ups (Chen et al., 2023; Kong et al., 2023), other methods have been proposed. These include score-based generative modeling (Niu et al., 2020; Jo et al., 2022) and autoregressive models (Liao et al., 2019). Earlier graph GANs are based on recursive neural networks (You et al., 2018) or random walks (Bojchevski et al., 2018). Other approaches utilize normalizing flows (Madhawa et al., 2020) or focus on permutation invariance (Vignac & Frossard, 2022). While these approaches are tailored to graphs, sometimes even limited to molecule graphs, DOG is a more general approach that works well on graphs. ### 6 DISCUSSION **Training Efficiency** DOG outperforms approaches like SPECTRE (Martinkus et al., 2022) on graphs without requiring an elaborate multi-step approach where some generators and discriminators need to be trained before others. This greatly simplifies the setting by using only a single discriminator model. Moreover, unlike the extensive journey of different generators from GraphGAN (Wang et al., 2018) to SPECTRE, there is no need to tune the generator architecture for DOG. Furthermore, DOG has an advantage over DiGress in that it requires fewer parameters and significantly fewer training epochs (Vignac et al., 2023). For QM9, DiGress was trained for 1000 epochs using an expensive transformer model with 4.6 million parameters. In contrast, DOG achieves the same sample quality with only 30 epochs and 1.1 million parameters for the large discriminator configuration (and merely 145 thousand parameters for the default configuration). Another conceptual advantage of DOG is its ability to instantly update GO with $D$, directly taking advantage of the adjusted discriminator weights from the previous training step, potentially providing high-quality training samples (namely those that maximally fool $D$) to $D$ early on. This is in contrast to GANs, where the generator model must first learn from the discriminator and always lags behind. 
As a result, DOG may require fewer training steps to achieve good performance, as demonstrated on QM9 (see Section 4.2). This observation suggests that in DOG, inner (generation) and outer (training) optimization are closely related. Nevertheless, while DOG shows advantages over GANs on graphs regarding the outer optimization, one of its main limitations is the currently expensive training steps that include GO in the inner optimization. A detailed analysis of the computational cost can be found in Appendix C. A potential solution to improve training efficiency is to reuse generated samples during training. By using a sample replay buffer, such as (Du & Mordatch, 2019), the starting point for GO could be closer to realistic samples, thus requiring fewer GO steps to achieve convergence. Another approach could be to use the intermediate samples and their gradients, instead of discarding them altogether with the stop gradient operator, to regularize and potentially shorten the GO paths. Additionally, we expect that more hyperparameter tuning for Adam and OneCycle, or using a more suitable GO approach altogether, could further reduce the number of steps required. By analogy, note that earlier denoising diffusion approaches used thousands of steps, which could be reduced to only a handful by skipping steps and increasing the step size (Meng et al., 2022). **GO Paths** While DOG is seemingly more parameter-efficient than GANs since it does not require the parameters of a generator, the DOG discriminator has a different task to solve. Given a broad distribution $\mathcal{P}$, GO can start anywhere and, assuming appropriate hyperparameters, will eventually reach all local maxima in the score surface of $D$. Thus, a good discriminator should have local maxima only at realistic samples and should be robust to the adversarial attack GO performs. This observation suggests that the score surface of a DOG discriminator must provide meaningful gradients over the entire target domain for the GO path from each noisy sample to a realistic one. This is in contrast to the score surface of a GAN discriminator, which only needs to provide meaningful gradients on the subset of the domain currently covered by the generator. The generator is restricted by its architecture and its current parameters and therefore many local maxima in the GAN discriminator’s score surface are not sampled and $L_D$ is not computed at these locations. For example, for face image data, after teaching a generator the global structure of a face, a GAN discriminator could adapt to focus on the local texture but struggle with less structured data (as can be seen in Appendix B.1). Therefore, we speculate that a DOG discriminator might benefit from an architecture and regularization techniques that are optimized to accommodate this difference. Assuming good performance, where the distribution of generated samples largely follows the real data distribution, the GO paths must be meaningful. The emergence of these paths is non-trivial: it is not enough to have realistic samples at local maxima; they must also be reachable via the GO paths with an appropriate probability. In particular, the loss function $L_D$ only applies directly to the real and generated samples, not to distant locations (i.e., $x_i^{\text{gen}}$). However, the gradient at these locations may already determine the general direction of many GO paths. 
For larger settings than the one described in Section 3, investigating how these paths emerge, such that the generated samples follow the underlying data distribution using only $L_D$, is left for future work. Potentially, these paths emerge in a manner similar to the paths taken by adversarial reconstruction attacks in classification models (Haim et al., 2022). ## 7 Conclusion Overall, DOG’s ability to provide high-quality training samples directly to the discriminator $D$, along with its simplified setting that eliminates the need to tune a generator architecture, makes it a promising generative approach. On graphs, we demonstrate that high-quality sample generation is possible with the conceptually simple self-adversarial approach, outperforming GANs while being on par with state-of-the-art diffusion approaches. Further improvements to the discriminator architecture, regularization techniques, and DOG’s speed may also enable competitive performance in other domains. REFERENCES David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. *Cognitive science*, 9(1):147–169, 1985. David Alvarez-Melis, Vikas Garg, and Adam Tauman Kalai. Why gans are overkill for nlp. *arXiv preprint arXiv:2205.09838*, 2022. Abdul Fatir Ansari, Ming Liang Ang, and Harold Soh. Refining deep generative models via discriminator gradient flow. In *International Conference on Learning Representations*, 2020. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *International conference on machine learning*, pp. 214–223. PMLR, 2017. Borja Balle, Giovanni Cherubin, and Jamie Hayes. Reconstructing training data with informed adversaries. In *NeurIPS 2021 Workshop Privacy in Machine Learning*, 2021. URL https://openreview.net/forum?id=Yi2DZTbnBl4. Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, and Stephan Günnemann. Netgan: Generating graphs via random walks. In *International conference on machine learning*, pp. 610–619. PMLR, 2018. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=Blxsqj09Fm. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11315–11325, 2022. Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. *arXiv preprint arXiv:2301.00704*, 2023. Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. Your gan is secretly an energy-based model and you should use discriminator driven latent sampling. *Advances in Neural Information Processing Systems*, 33:12275–12287, 2020. Xiaohui Chen, Jiaxing He, Xu Han, and Liping Liu. Efficient and degree-guided graph generation via discrete diffusion modeling. 
In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA*, volume 202 of *Proceedings of Machine Learning Research*, pp. 4585–4610. PMLR, 2023. URL https://proceedings.mlr.press/v202/chen23k.html. Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs. *ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models*, 2018. Paul D Dobson and Andrew J Doig. Distinguishing enzyme structures from non-enzymes without alignments. *Journal of molecular biology*, 330(4):771–783, 2003. Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. Benchmarking adversarial robustness on image classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 321–331, 2020. Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. *Advances in Neural Information Processing Systems*, 32, 2019.
NrI1OkZkiy
W1: Only five baselines are considered, focusing on the years 2019 to 2021, and there is no comparison with models from the last two years (Convergence and Enhancement, although shown as 2023 in the citation, are actually work from 2021).
NDIM: Neuronal Diversity Inspired Model for Multisensory Emotion Recognition Anonymous authors Paper under double-blind review Abstract Without cross-sensory interaction, a key aspect of multisensory emotion recognition, traditional deep learning methods exhibit inferior performance in this task. On the contrary, the human brain possesses an inherent and remarkable capacity for multisensory recognition. Its diverse neurons exhibit distinct responses to sensory inputs, thus facilitating cross-sensory interaction. Leveraging this superiority, we propose the Neuronal Diversity Inspired Model (NDIM), which incorporates both unisensory and multisensory neurons, aligning with the human brain. To mirror the diverse response characteristics exhibited by various neurons, we introduce innovative connection constraints to regulate feature transmission between neurons. Drawing inspiration from this novel concept of neuronal diversity, our model exhibits biological plausibility, facilitating more effective emotion recognition of multisensory information. Experiments on the RAVDESS and eNTER-FAVE’05 datasets show that the NDIM achieves the best accuracy of 99.63% and 98.45%, respectively, demonstrating the potential of neuronal-diversity-inspired approaches in advancing multisensory interaction and emotion recognition. 1 Introduction Multisensory emotion recognition is an emerging technique that demonstrates superior performance compared to unisensory recognition due to its ability to mitigate non-robustness observed in unisensory recognition (Sun et al., 2008). Thus this technique has gained significant attention across diverse fields such as human-computer interaction (Abdullah et al., 2021), emotion regulation (Liu et al., 2019), and the diagnosis of emotion-related diseases (Widge et al., 2018). To further enhance emotion recognition performance, multisensory fusion is a crucial approach that effectively exploits complementary information and accentuates the most relevant details (Ranganathan et al., 2016). With the advancements in deep learning, an increasing number of methods have been employed for fusing multisensory information. Diverse approaches, including Convolutional Neural Network (CNN) based, Recurrent Neural Network (RNN) based, and hybrids with other algorithms, have been employed to accomplish fusion, yielding promising outcomes in multisensory emotion recognition tasks (Zhang et al., 2017; Tzirakis et al., 2017; Tang et al., 2017; Huan et al., 2021). Furthermore, deep generative models, such as variational autoencoders (VAEs) (Zhou et al., 2018; Wang et al., 2022), and generative adversarial networks (GAN) (Luo et al., 2019; Ma et al., 2022), exhibit outstanding performance due to their superior feature representation capabilities (Suzuki & Matsuo, 2022). Nevertheless, the majority of these methods fail to consider the interaction between different senses, a crucial aspect in multisensory fusion and recognition (Mansouri-Benssassi & Ye, 2020b). In contrast to the aforementioned approaches, large language models like MulT (Tsai et al., 2019), which are based on Transformer architecture, facilitate cross-sensory interaction through diverse attention mechanisms. However, concerns persist regarding their dependence on extensive computation and large datasets, resulting in diminished computational efficiency and elevated training costs (Shin et al., 2022). 
Given the limitations of prior methods, there is an urgent need for a multisensory recognition model that enables intricate cross-sensory interaction while minimizing computation and data requirements. The human brain possesses an innate and remarkable ability to perceive and recognize the external environment by efficiently utilizing multisensory information, including vision and hearing (McDonald et al., 2001; Oshiro et al., 2011). To provide machines with similar advantages, the Convergence and the Enhancement model are proposed for effective emotion recognition from multisensory information. However, these models have been shown to be inadequate in achieving cross-sensory interaction. The superior multisensory emotion recognition abilities of the human brain are underpinned by the cross-sensory interaction, as evidenced by cognitive neuroscience research (Alvarado et al., 2007). During this interaction, information from different senses can complement and mutually influence one another, facilitating efficient and comprehensive recognition (Holmes, 2007). Specifically, the multisensory interaction of the human brain is attributed to diverse neurons, including both unisensory and multisensory neurons (Stevenson et al., 2014). These neurons exhibit distinct responses to sensory inputs, with unisensory neurons specifically responding to single sensory information, while multisensory neurons respond to inputs from multiple senses (Stein & Stanford, 2008). The human brain effectively captures cross-sensory interaction due to these neurons’ distinct response characteristics, thereby enabling superior multisensory emotion recognition abilities (Laurienti et al., 2005). Based on the facilitation of diverse neurons, including unisensory and multisensory neurons, in multisensory emotion recognition of the human brain, we propose the Neuronal Diversity Inspired Model (NDIM) for multisensory emotion recognition through Spiking Neural Networks (SNN). Our model incorporates the novel concept of neuronal diversity, making it biologically plausible and enabling more effective multisensory emotion recognition. This research presents several significant contributions, summarized as follows. Firstly, in alignment with the observed neuronal diversity in the human brain, the NDIM incorporates both unisensory and multisensory neurons to effectively model and learn cross-sensory interaction. Secondly, pioneering special connection constraints are designed to regulate feature transmission within the NDIM, reflecting the different response characteristics of diverse neurons. Thirdly, we evaluate the NDIM on the RAVDESS and eENTERFAVE’05 datasets, showing that the NDIM achieves the best accuracy of 99.63% and 98.45%, respectively, consistently outperforming state-of-the-art brain-inspired approaches. 2 RELATED WORK In the context of multisensory emotion recognition, some brain-inspired models have been proposed to effectively recognize the information from multiple senses. The Convergence model (Benssassi & Ye, 2023) draws inspiration from the convergence theory, which posits that information from distinct senses converges in higher-order brain regions where it is fused and recognized (Stein & Meredith, 1993). In this model, a convergence layer is built to recognize the concatenated features extracted from the visual and auditory senses, serving as the higher-order region. Before the convergence layer, two separate layers extract sense-specific features. 
Unfortunately, this model lacks interaction between the individual sensory inputs, leading to suboptimal performance. The Enhancement model (Benssassi & Ye, 2023) is inspired by the enhancement theory that highlights the impact of visual information on auditory cortex activity (Molholm et al., 2002; Jessen & Kotz, 2013). In this model, the auditory feature extraction layer receives inputs not only from the auditory input layer but also from the visual layer. Regrettably, the model solely accounts for unidirectional connections from the visual sense to the auditory sense, thus falling short of achieving comprehensive interaction between the visual and auditory senses. The Synch-Graph model (Mansouri-Benssassi & Ye, 2020a) considers the interaction, incorporating the concept of neural synchrony. Neural synchrony refers to the simultaneous neural oscillations of distinct groups of neurons connected by synapses, and it is considered a facilitator of multisensory interaction (Stein, 2012). In the Synch-Graph model, bidirectional connections are established between visual and auditory neurons, and a graph is employed to represent the neural synchrony among these neurons. Graph Convolutional Networks (GCN) are utilized to classify the graphs. However, this model exhibits limited robustness in the presence of noise. In summary, the first two brain-inspired models have proven inadequate in achieving multisensory interaction. Although the Synch-Graph model enables interaction, it comes at the expense of limited robustness. Consequently, there is a pressing need to explore novel brain mechanisms to serve as inspiration for developing a model capable of effectively achieving interaction and emotion recognition. 3 Neuronal Diversity of the Human Brain In the human brain, efficient processing and recognition of multisensory emotion rely on diverse neurons that exhibit distinct responses to sensory inputs. This section provides an introduction to these neurons, their response characteristics, and their role in facilitating cross-sensory interaction. Superior Colliculus (SC) (Cuppini et al., 2011b), the posterior Superior Temporal Sulcus and Gyrus (STS/STG) (Engel et al., 2012; Chabrol et al., 2015), are responsible for multisensory emotion recognition. Within these regions, a variety of neurons, including multisensory neurons and unisensory neurons, are prevalent. Multisensory neurons are defined as neurons that respond to stimuli from more than one sense, while unisensory neurons respond exclusively to a single sense (Fetsch et al., 2013). Unisensory neurons can be further categorized into visual-specific and auditory-specific neurons (Stevenson et al., 2014). Multisensory neurons demonstrate significantly greater responses to multisensory stimuli that share a common source compared to any single sense stimuli (Cuppini et al., 2011a). On the other hand, unisensory neurons exhibit no significant changes in response when presented with multisensory stimuli versus single-sensory stimuli. Due to these response characteristics, unisensory neurons and multisensory neurons exhibit different levels of activation when the human brain receives multisensory input (Stein & Stanford, 2008). Moreover, when a neuron receives and activates in response to information, it selectively transmits this information to target neurons based on its type. 
As a result, information from different senses can be directionally transmitted between these neurons, influencing and complementing each other, thereby achieving cross-sensory interaction and facilitating multisensory emotion recognition (Allman et al., 2009).

4 Proposed NDIM Model

Building upon the advantages of neuronal diversity in cross-sensory interaction, we present our model in Figure 1. The NDIM consists of three modules: the Unisensory Processing Module, the Neuronal Diversity Module, and the Interaction Module. The Neuronal Diversity Module determines neuron types based on their spiking patterns and devises unique connection constraints to facilitate multisensory interaction and emotion recognition in the Interaction Module. In the subsequent sections, we present the overarching framework of our model (Section 4.1), followed by the introduction of its key components: the Neuronal Diversity Module (Section 4.2) and the Interaction Module (Section 4.3).

4.1 Overall Architecture

In this model, we initially process the unisensory data in the Unisensory Processing Module to extract semantic features. These extracted features then converge in the Interaction Module for interaction and emotion recognition. To closely emulate the neuronal diversity in the human brain, we design the Neuronal Diversity Module to enable comprehensive interaction through various types of neurons and specific connection constraints.

We investigate two sensory modalities in this study: visual (denoted by the superscript \((v)\)) and auditory (denoted by the superscript \((a)\)). The input data for each sense is denoted as \(X^{(v)} \in \mathbb{R}^{t^{(v)} \times d^{(v)}}\) and \(X^{(a)} \in \mathbb{R}^{t^{(a)} \times d^{(a)}}\) for every sample. Here, \(t^{(*)}\) and \(d^{(*)}\) represent the time dimension and feature dimension, respectively. Initially, the input data undergoes pre-processing and feature extraction, resulting in the primary features of each modality, denoted as \(F_p^{(v)}\) and \(F_p^{(a)}\). These primary features are then further processed as semantic features, denoted as \(F_s^{(v)}\) and \(F_s^{(a)}\), through a spiking convolution layer and a pooling layer. Subsequently, the multisensory features \(F_m\) are formed by concatenating the semantic features from the visual and auditory senses. The multisensory features are then passed through the Interaction Module for further recognition.

The Interaction Module consists of two hidden layers and a readout layer. The hidden layers comprise three types of neurons: unisensory neurons for the visual sense, unisensory neurons for the auditory sense, and multisensory neurons. Specific constraints are designed to govern the connections between different types of neurons, regulating the transmission of features across these layers.

Figure 1: The overall architecture of the proposed NDIM model that is inspired by the neuronal diversity. Three modules are included in the NDIM: the Unisensory Processing Module, the Neuronal Diversity Module, and the Interaction Module. The gray rectangular area accommodates two networks responsible for unisensory emotion recognition. Once trained, the Neuronal Diversity Module receives the spiking patterns of the internal neurons from these networks. The Neuronal Diversity Module determines the neuronal types based on their spiking patterns and designs unique connection constraints to enable the Interaction Module to achieve multisensory interaction and emotion recognition.
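The constraint mechanism just mentioned can be pictured with a small, non-spiking sketch: a hidden layer whose weight matrix is gated elementwise by a binary connection mask, so that features only flow along permitted neuron-to-neuron links (the Hadamard-product computation described next). The layer sizes, the ReLU/linear simplification of the spiking LIF dynamics, and the all-ones placeholder mask are assumptions for illustration; the actual mask construction is given in Section 4.2.

```python
import torch
import torch.nn as nn

class MaskedInteractionLayer(nn.Module):
    """One hidden layer of the Interaction Module (simplified, non-spiking sketch).
    A fixed binary mask (connection constraints) is applied elementwise to the
    weight matrix, so features can only flow along permitted connections."""
    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.register_buffer("mask", mask)   # binary constraint matrix, not trained

    def forward(self, x):
        return torch.relu(x @ (self.weight * self.mask).t())

# Example: concatenated semantic features F_m = [F_s^(v); F_s^(a)] feed the first layer.
f_v, f_a = torch.randn(8, 64), torch.randn(8, 64)   # assumed feature sizes
f_m = torch.cat([f_v, f_a], dim=1)
mask = torch.ones(128, 128)                          # placeholder; real constraints come from Sec. 4.2
layer = MaskedInteractionLayer(128, 128, mask)
h1 = layer(f_m)
```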
Notably, each layer of the Interaction Module comprises distinct neuron types. To achieve a unique transmission of sensory information between these neurons, the weights and connection constraints undergo Hadamard product computations. The neuron model employed in the NDIM architecture is the classical leaky integrated-and-fire (LIF) model. To enhance the learning capacity of the network, neuronal plasticity (Jia et al., 2021) is considered. In this regard, the neuron model incorporates an adaptive firing threshold determined by an ordinary differential equation. For instance, the update of the membrane potential for the $i$-th neuron in the $l$-th hidden layer can be illustrated as follows: $$C \frac{dV_i(t)}{dt} = g(V_i(t) - V_1)(1 - S_i(t)) + \sum_{j=1}^{n_l} \left(W_{i,j}^{(l)}\right) F_s^{(m)} - \gamma a_i(t)$$ (1) $$if(V_i(t) = V_{th}), \begin{cases} V_i(t) = V_2, \\ S_i(t) = 1, \end{cases}$$ (2) $$\frac{da_i(t)}{dt} = (\alpha - 1)a_i(t) + \beta S_i(t)$$ (3) where $C$ is the capacitance parameter, $g$ is the conductance value, $V_i(t)$ is the membrane potential of neuron $i$ at timing $t$, $S_i(t)$ is the firing flag, $V_1$ is the resting potential, $V_2$ is the reset membrane potentials, $V_{th}$ is the firing threshold. $n_l$ is the number of neurons in the $l$-th hidden layer. $W_{i,j}^{(l)}$ is the synaptic weight from the neuron $i$ to the neuron $j$ in the $l$-th layer. The dynamic threshold $a_i(t)$ is accumulated during the period from the resetting to the membrane potential firing, and as the frequency of firing increases, the threshold also increases, and vice versa. 4.2 Neuronal Diversity Module In Section 3, we have explained the significance of different types of neurons and synaptic connections in achieving effective multisensory interaction and emotion recognition. Drawing inspiration from this, our model incorporates diverse neurons and special weight constraints to facilitate comprehensive interaction among multisensory information. To extract this information, we propose the Neuronal Diversity Module, as illustrated in Figure 2. This subsection details the process of determining diverse neurons and establishing two connection constraints based on them. These constraints aim to facilitate multisensory interaction by enforcing specific weight configurations. To identify diverse neurons and establish connection constraints, additional unisensory emotion recognition networks \((Re^{(v)}, Re^{(a)})\) are designed, as shown in the gray rectangular area of Figure 1. These networks consist of two fully connected layers and one readout layer, serving to classify the unisensory features \(F^{(v)}\) and \(F^{(a)}\). The ensemble comprising the unisensory processing and recognition network for each single sense can be trained separately. Subsequently, the spiking patterns of neurons from the trained recognition networks \(Re^{(v)}\) and \(Re^{(a)}\) are recorded and utilized by the Neuronal Diversity Module to identify the diverse neurons and establish connection constraints. ![Neuronal Diversity Module Diagram](image) Figure 2: The detailed diagram of the Neuronal Diversity Module. \(M\) represents the multisensory neurons and \(U\) represents the unisensory neurons. 4.2.1 Determining Diverse Neurons To elaborate on our ideas of these modules, we explain how the NDIM correlates with multisensory emotion recognition mechanisms in the human brain. Unisensory information is processed through feature extraction and recognition processes that result in a final classification. 
This process corresponds to the unisensory information recognition circuit in the human brain, with the visual and auditory processing streams corresponding to the ventral visual and auditory pathways, respectively. The architectures of \(Re^{(v)}\), \(Re^{(a)}\), and the Interaction Module are identical since they mimic higher-order regions. However, the difference lies in the populations of neurons they mimic. Unisensory neurons for vision and multisensory neurons respond to visual stimuli, which corresponds to the neurons that \(Re^{(v)}\) mimics, while \(Re^{(a)}\) involves unisensory neurons for audio and multisensory neurons. Therefore, different activation patterns of neurons can differentiate between unisensory and multisensory neurons in higher-order regions.

The Neuronal Diversity Module identifies different types of neurons in a layer-by-layer fashion. In particular, the spiking patterns of neurons in the first layer of \(Re^{(v)}\) and \(Re^{(a)}\) are used to identify neurons in the first layer of the Neuronal Diversity Module, and this process is repeated for each subsequent layer. To represent spike patterns, let \( S_{l,(v)}^i, S_{l,(a)}^i \in \mathbb{R}^{n_l \times t'} \) denote the spikes of all neurons in the \( l \)-th layer for the \( i \)-th visual and auditory sample, respectively. Spike patterns, denoted as \( P_{l,(v)} \) and \( P_{l,(a)} \), are obtained by averaging the spikes over the time dimension and over samples, which is formulated as:

\[
P_{l,(v)} = \frac{1}{n_l} \sum_{i=1}^{n_l} \text{Mean}_t(S_{l,(v)}^i), \quad P_{l,(a)} = \frac{1}{n_l} \sum_{i=1}^{n_l} \text{Mean}_t(S_{l,(a)}^i)
\]
(4)

where \( \text{Mean}_t(\cdot) \) represents the average value over the time dimension. Then, the multisensory neurons are determined by

\[
M^l = \text{Top}(P_{l,(v)}, \rho) \cap \text{Top}(P_{l,(a)}, \rho)
\]
(5)

where \( \text{Top}(\cdot, \rho) \) is used to identify the neurons with the highest spiking level based on their spike patterns. Here, \( \rho \) is a hyperparameter ranging from 0 to 1, indicating the fraction of neurons with the top-\( \rho \) highest spike rates that are considered to have the strongest spiking level. Multisensory neurons in the \( l \)-th layer, denoted as \( M^l \), are found by intersecting the most strongly spiking neurons under visual input with those under auditory input. Unisensory neurons, on the other hand, are obtained by taking the difference between all neurons and the multisensory neurons in the \( l \)-th layer, which is identified by

\[
U_{l,(v)} = A_{l,(v)} \setminus M^l, \quad U_{l,(a)} = A_{l,(a)} \setminus M^l
\]
(6)

where \( U_{l,(v)} \) and \( U_{l,(a)} \) represent the unisensory neurons in the \( l \)-th layer, and \( A_{l,(v)} \) and \( A_{l,(a)} \) represent all neurons in the \( l \)-th layer.

### 4.2.2 Establishing Weight Constraints

Inspired by the response characteristics of different neurons, we establish connection constraints, denoted as matrices \( C^l \in \mathbb{R}^{n_{l-1} \times n_l} \), where \( n_l \) represents the number of neurons in the \( l \)-th layer of the Neuronal Diversity Module. These matrices facilitate the interaction between neurons of different senses. They are filled with 1 or 0, representing connected and disconnected weights, respectively. The function \( D(i,j) \) determines the connection constraint from the \( i \)-th neuron in the \((l-1)\)-th layer (source neuron) to the \( j \)-th neuron in the \( l \)-th layer (target neuron). The function value of \( D(i,j) \) is determined by the following rules.
Firstly, if the source neuron is unisensory and the target neuron is either a unisensory neuron of the same modality or a multisensory neuron, the function value is set to 1. Secondly, if the source neuron is multisensory and the target neuron is unisensory, the function value is set to 1. Thirdly, in all other cases, the function value is set to 0. These rules define the masks for both layers. The weight constraints, along with the diverse neurons, are then projected to the Interaction Module for multisensory emotion recognition.

### 4.3 Interaction Module

The Interaction Module aims to enhance multisensory emotion recognition by leveraging the diverse neurons and connection constraints introduced earlier. This subsection provides a detailed explanation of how the module achieves interaction and emotion recognition based on these diverse neurons and weight constraints.

Diverse neurons and weight constraints are determined and established to guide the Interaction Module for further recognition. The diverse neurons facilitate the identification of multisensory and unisensory neurons within the module. Meanwhile, the weight constraints restrict the connections between neurons through the Hadamard product with \( C^l \). This approach aligns with the response characteristics of neuronal diversity, where multisensory neurons in higher-order regions respond to stimuli from multiple senses, while unisensory neurons only respond to stimuli from the same sense. Through these neurons and constraints, interactions between different senses are achieved. Specifically, the auditory features \( F_s^{(a)} \) are connected to the multisensory neurons of the first layer \( M^1 \), which in turn connect to the visual neurons in the second layer \( U^{2,(v)} \). This configuration enables a unidirectional projection from the auditory sense to the visual sense, as well as vice versa. Consequently, the interaction between the visual and auditory senses is achieved.

5 EXPERIMENTS

5.1 EXPERIMENT SETUP

5.1.1 DATASETS

Two datasets were used to evaluate our model. The first dataset is the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) (Livingstone & Russo, 2018). It includes recordings from 24 participants, with a balanced gender distribution. The participants read a sentence in eight different emotional states: neutral, calm, happy, sad, angry, fearful, disgust, and surprised. For this study, we focused solely on the speech and video modalities within this dataset. The second dataset, eNTERFACE'05 (Martin et al., 2006), consists of recordings from 42 participants. The participants comprise 81% male and 19% female. The audio recordings have a sampling rate of 48000 Hz in 16-bit format, while the videos have a frame rate of 25 frames per second. Each participant expressed six different emotions: anger, disgust, fear, happiness, sadness, and surprise.

5.1.2 FEATURE EXTRACTION

The primary features of each sense, denoted as \( F_p^{(v)} \) and \( F_p^{(a)} \), are obtained by feature extraction. For the visual modality, 15 frames are extracted at equal intervals from each video. Facial contours are then extracted from these frames and downscaled to a size of 28×28 pixels, serving as visual features for each frame. Consequently, the final dimension of the visual feature \( F_p^{(v)} \) for each video is \( \mathbb{R}^{15 \times 784} \). Regarding the auditory modality, we extract Mel-scale Frequency Cepstral Coefficients (MFCC), a feature widely used in speech recognition, as auditory features \( F_p^{(a)} \) for each speech sample; a rough sketch of this pre-processing pipeline is given below.
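The following is a minimal sketch of the pre-processing just described, assuming OpenCV for frame sampling and librosa for MFCC extraction; the grayscale resizing stands in for the facial-contour step, and all parameters other than the 15 frames, the 28×28 resolution, and the 12 MFCC coefficients are illustrative assumptions.

```python
import cv2
import librosa
import numpy as np

def extract_visual_features(video_path, n_frames=15, size=28):
    """Sample n_frames equally spaced frames and resize them to size x size.
    Facial-contour extraction is omitted here and replaced by grayscale resizing."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    idxs = np.linspace(0, total - 1, n_frames).astype(int)
    feats = []
    for i in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        feats.append(cv2.resize(gray, (size, size)).flatten())
    cap.release()
    return np.stack(feats)          # shape: (15, 784)

def extract_auditory_features(wav_path, n_mfcc=12):
    """Compute per-frame MFCCs for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                   # shape: (num_frames, 12)
```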
The average number of frames across all speech samples is 280, with a feature dimension of 12 per frame. Therefore, the final dimension of the auditory feature is \( R^{12*280} \). 5.2 OVERALL PERFORMANCE We focus on evaluating the performance of the NDIM modal in multisensory emotion recognition by conducting a series of experiments on two datasets. We compare our proposed model with state-of-the-art techniques and analyze its interpretability. Tables 1 and 2 show the performance comparison between the NDIM modal and the state-of-the-art baselines on the two datasets. Table 1: Comparison of accuracy for multisensory emotion recognition on RAVDESS dataset. * represents that the performance of this model is obtained from the relevant paper. | Model | Neutral | Calm | Happy | Sad | Angry | Fearful | Disgust | Surprised | Acc | |-------------|---------|--------|-------|-------|-------|---------|---------|-----------|---------| | Convergence*| - | - | - | - | - | - | - | - | 0.8130 | | Enhancement*| - | - | - | - | - | - | - | - | 0.7330 | | Synch-Graph*| 1.0000 | 1.0000 | 1.0000| 0.9550| 0.9310| 0.9290 | 0.9830 | | MR-SNN | 0.9655 | 0.9825 | 1.0000| 0.9355| 0.9310| 0.9091 | 0.9474 | 0.9655 | 0.9537 | | MulT* | - | - | - | - | - | - | - | - | 0.7416 | | NDIM | 1.0000 | 1.0000 | 1.0000| 0.9933| 1.0000| 0.9871 | 1.0000 | 0.9931 | 0.9963 | Our model achieves the best performance on both datasets, with an accuracy of 99.63% on the RAVDESS dataset and 98.45% on the eNTERFACE’05 dataset. Notably, these outcomes outperform the state-of-the-art multisensory emotion recognition method employed on the same datasets. Among the baselines, the Synch-Graph model performs best on the two datasets, achieving accuracies of 98.30% and 96.82% respectively. This model learns synchrony patterns between audio and visual neuron groups. In comparison, our model recognizes concatenated features, drawing inspiration from neuronal diversity. Interaction is achieved through special connections between Table 2: Comparison of accuracy for multisensory emotion recognition on eNTERFACE’05 dataset. * represents that the performance of this model is obtained from the relevant paper. | Model | Angry | Disgust | Fear | Happy | Sad | Surprised | Acc | |----------------|-------|---------|------|-------|-----|-----------|-------| | Convergence* | - | - | - | - | - | - | 0.8330| | Enhancement* | - | - | - | - | - | - | 0.8330| | Synch-Graph* | 0.9470| 0.9550 | 1.0000| 1.0000| 0.9200| 1.0000 | 0.9682| | MR-SNN | 0.9231| 0.9180 | 0.9032| 0.9118| 0.9231| 0.8955 | 0.9124| | NDIM | **0.9524**| **1.0000**| **1.0000**| **1.0000**| **0.9524**| **1.0000**| **0.9845**| unisensory neurons and multisensory neurons. The Neuronal Diversity Module determines neuron diversity and establishes connection constraints, enabling the Interaction Module to interact and recognize multisensory information. Owing to these advantages, our model achieves an accuracy of 99.63% with eight classes on the RAVDESS dataset, while the Synch-Graph achieves 98.30% accuracy with six classes. The two classes not considered by the Synch-Graph model are neutral and calm. In these two classes, the ND-MRM model achieves 100% accuracy. Furthermore, our model outperforms the Synch-Graph by 1.63% on the eNTERFACE’05 dataset. Compared with other brain-inspired methods such as the Convergence, Enhancement, and M-SNN, our model outperforms them by 18.33%, 26.33%, 4.26% on the RAVDESS dataset and 15.15%, 15.15%, 7.21% on the eNTERFACE’05 dataset respectively. 
In addition to these brain-inspired methods, we also compare the performance of the MulT model with our model on the RAVDESS dataset. The accuracy of the MulT model on seven classes is 74.16%, while our model outperforms MulT by 25.47% (Chumachenko et al., 2022). In summary, our model demonstrates superior performance on both datasets for the task of multisensory emotion recognition. The NDIM model captures the interaction between different senses using diverse neurons and special connection constraints, outperforming the other brain-inspired methods and the MulT model’s pairwise cross-modal attention approach. 5.3 Ablation Study In this subsection, we conduct two ablation experiments to further investigate the impact of neuronal diversity on multisensory emotion recognition. Firstly, we examine the influence of neuronal type on emotion recognition in the hidden layers of the Interaction Module. We compare models with either all unisensory neurons or all multisensory neurons to understand whether a model with multiple types of neurons working together outperforms a model with a single type of neurons when the total number of neurons remains constant. Secondly, we study the effect of varying the hyperparameter $\rho$, which determines the number of neurons, on multisensory emotion recognition. Figure 3: Accuracy comparison of the ablation experiments on neuronal diversity. From left to right: (a) Accuracy comparison of neuronal types on the RAVDESS dataset. (b) Accuracy comparison about neuronal types on the eNTERFACE’05 dataset. (c) Accuracy comparison of neuronal numbers on the two datasets. 5.3.1 Ablation Study on Neuronal Types Both multisensory neurons and unisensory neurons are components of the NDIM model, which aims to achieve cross-sensory interaction. Therefore, this ablation experiment investigates the performance of models that contain either only multisensory neurons or only unisensory neurons in the two hidden layers. The performance comparison is shown in Figure 3(a)(b). Compared with the other two types of models, the model containing only unisensory neurons exhibits the worst performance, achieving 91.94% on the RAVDESS dataset and 95.97% on the eINTERFACE’05 datasets. This is evident because unisensory neurons solely respond to a single sensory stimulus, preventing the model from achieving cross-modal interaction and resulting in its inferior performance. Additionally, the model containing only multisensory neurons surpasses those with unisensory neurons but is still outperformed by the model with multiple types of neurons. Unlike the NDIM, which achieves regular interaction of multisensory information through neuron diversity and connection constraints, the hidden layers in the model that contain only multisensory neurons are fully connected. This distinction accounts for our model’s superiority. 5.3.2 Ablation Study on the Number of Neurons As mentioned in Section 4, the interaction between different senses is facilitated by distinct types of neurons and their synaptic connections, which are determined by the hyperparameter $\rho$. Figure 3(c) presents the values of three parameters: $\rho$, the proportion $(p)$ of multisensory neurons among all neurons, and the weighted accuracy on the RAVDESS and eINTERFACE’05 datasets. As shown in the figure, as the hyperparameter $\rho$ approaches 0, the number of multisensory neurons decreases. 
When $\rho$ is 0.1, the number of multisensory neurons becomes almost zero, leading to a near disappearance of the interaction between multiple senses. In this condition, the model achieves a weighted accuracy of 93.61% and 91.25% on the two datasets, respectively, which is significantly lower than the accuracy achieved in other conditions. Moreover, when $\rho$ is set to 0.9, the number of multisensory neurons accounts for over 67% of the total number of neurons. The performance in this condition is lower than that in conditions where the number of multisensory neurons accounts for approximately 33%. We attribute this to an imbalance between the number of unisensory neurons for auditory and visual senses and the number of multisensory neurons. When $\rho$ is 0.9, the model overly emphasizes the interaction between different senses while neglecting the extraction of higher-level features from the individual senses. Therefore, when $\rho$ is 0.7, the number of multisensory neurons accounts for approximately 33%, and the number of different types of neurons reaches a relative equilibrium, resulting in the best performance of our model. 6 Conclusion This study aims to develop a novel multisensory emotion recognition model called NDIM, drawing inspiration from the neuronal diversity observed in the human brain. By incorporating both unisensory and multisensory neurons, our model effectively captures cross-sensory interactions, facilitating multisensory emotion recognition. Additionally, we implement special connection constraints to regulate feature transmission, which aligns with the distinctive response characteristics exhibited by diverse neurons. The performance of the NDIM on the RAVDESS and eINTERFACE’05 datasets is 99.63% and 98.45%, respectively, revealing its superiority over alternative brain-inspired approaches. For future research, we plan to extend the application of our method to multiple other sensory modalities, exploring its effectiveness and adaptability in various tasks. Additionally, we aim to explore the development of an end-to-end model that integrates all stages of information processing within a unified framework. References Sharmeen M Saleem Abdullah Abdullah, Siddeeq Y Ameen Ameen, Mohammed AM Sadeeq, and Subhi Zeebaree. Multimodal emotion recognition using deep learning. Journal of Applied Science and Technology Trends, 2(02):52–58, 2021. Brian L Allman, Leslie P Keniston, and M Alex Meredith. Not just for bimodal neurons anymore: the contribution of unimodal neurons to cortical multisensory processing. *Brain topography*, 21:157–167, 2009. Juan Carlos Alvarado, J William Vaughan, Terrence R Stanford, and Barry E Stein. Multisensory versus unisensory integration: contrasting modes in the superior colliculus. *Journal of neurophysiology*, 97(5):3193–3205, 2007. Esma Mansouri Benssassi and Juan Ye. Investigating multisensory integration in emotion recognition through bio-inspired computational models. *IEEE Transactions on Affective Computing*, 14(2):906–918, 2023. Fran?ois P Chabrol, Alexander Arenz, Martin T Wiechert, Troy W Margrie, and David A Digregorio. Synaptic diversity enables temporal coding of coincident multisensory inputs in single neurons. *Nature Neuroscience*, 18(5):718, 2015. Kateryna Chumachenko, Alexandros Iosifidis, and Moncef Gabbouj. Self-attention fusion for audiovisual emotion recognition with incomplete data. In *2022 26th International Conference on Pattern Recognition (ICPR)*, pp. 2822–2828. IEEE, 2022. 
Cristiano Cuppini, Elisa Magosso, and Mauro Ursino. Organization, maturation, and plasticity of multisensory integration: insights from computational modeling studies. *Frontiers in psychology*, 2:77, 2011a. Cristiano Cuppini, Barry E. Stein, Benjamin A. Rowland, Elisa Magosso, and Mauro Ursino. A computational study of multisensory maturation in the superior colliculus (sc). *Experimental Brain Research*, 213(2-3):341–349, 2011b. Andreas K Engel, Daniel Senkowski, and Till R Schneider. Multisensory integration through neural coherence. 2012. Christopher R Fetsch, Gregory C DeAngelis, and Dora E Angelaki. Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons. *Nature Reviews Neuroscience*, 14(6):429–442, 2013. Nicholas P Holmes. The law of inverse effectiveness in neurons and behaviour: multisensory integration versus normal variability. *Neuropsychologia*, 45(14):3340–3345, 2007. Ruo-Hong Huan, Jia Shu, Sheng-Lin Bao, Rong-Hua Liang, Peng Chen, and Kai-Kai Chi. Video multimodal emotion recognition based on bi-gru and attention fusion. *Multimedia Tools and Applications*, 80:8213–8240, 2021. Sarah Jessen and Sonja A Kotz. On the role of crossmodal prediction in audiovisual emotion perception. *Frontiers in Human Neuroscience*, 7:369, 2013. Shuncheng Jia, Tielin Zhang, Xiang Cheng, Hongxing Liu, and Bo Xu. Neuronal-plasticity and reward-propagation improved recurrent spiking neural networks. *Frontiers in Neuroscience*, 15:654786, 2021. Shuncheng Jia, Ruichen Zuo, Tielin Zhang, Hongxing Liu, and Bo Xu. Motif-topology and reward-learning improved spiking neural network for efficient multi-sensory integration. In *ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 8917–8921, 2022. Paul J Laurienti, Thomas J Perrault, Terrence R Stanford, Mark T Wallace, and Barry E Stein. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. *Experimental Brain Research*, 166:289–297, 2005. Wei Liu, Jie-Lin Qiu, Wei-Long Zheng, and Bao-Liang Lu. Multimodal emotion recognition using deep canonical correlation analysis. *arXiv preprint arXiv:1908.05349*, 2019. Steven R. Livingstone and Frank A. Russo. The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in north american english. *PLOS ONE*, 13(5):1–35, 05 2018.
WroPkTLiAJ
The authors call their method 'personalized'. In personalized FL, inference is made through the clients' individual models. But, as far as I understand, the output of your methodology is a single global model that would hopefully work for every client; although you obtain local posteriors, they are just intermediaries for training the global model. If this is the case, this is not a personalized method but a heterogeneous FL method. Otherwise, please clarify.
FedLPA: Personalized One-shot Federated Learning with Layer-wise Posterior Aggregation Anonymous authors Paper under double-blind review Abstract Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning. Recently, motivated by diminishing privacy concerns, mitigating potential attacks, and reducing the overhead of communication, one-shot federated learning (i.e., limiting client-server communication into a single round) has gained popularity among researchers. However, the one-shot aggregation performances are sensitively affected by the non-identical training data distribution, which exhibits high statistical heterogeneity in some real-world scenarios. To address this issue, we propose a novel one-shot aggregation method with Layer-wise Posterior Aggregation, named FedLPA. FedLPA aggregates local models to obtain a more accurate global model without requiring extra auxiliary datasets or exposing any confidential local information, e.g., label distributions. To effectively capture the statistics maintained in the biased local datasets in the practical non-IID scenario, we efficiently infer the posteriors of each layer in each local model using layer-wise Laplace approximation and aggregate them to train the global parameters. Extensive experimental results demonstrate that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics. 1 Introduction The significance of data privacy in Deep Learning [LeCun et al., 2015; Schmidhuber, 2015; Zhang et al., 2018; Krizhevsky et al., 2017; Amodei et al., 2016; Pouyanfar et al., 2018b] has surged to the forefront as a major global concern [Yang et al., 2019]. With the primary objectives of safeguarding data privacy and curbing the aggregation and management of data across institutions, the distribution of data exhibits variations among clients [Yang et al., 2019]. In the burgeoning domain of machine learning, federated learning (FL), as denoted by references [McMahan et al., 2016; Kairouz et al., 2021; Li et al., 2021], has emerged as a prominent paradigm. The fundamental tenet of federated learning revolves around the sharing of machine learning models derived from decentralized data repositories, as opposed to divulging the raw data itself. This approach effectively preserves the confidentiality of individual data. The standard federated learning framework, Fedavg [McMahan et al., 2016, 2017], involves local model training and aggregating these local models into a global one through parameter averaging. However, the majority of current FL algorithms like Fedavg necessitate numerous communication rounds to effectively train a global model. This leads to substantial communication overhead, heightened privacy concerns, and heightened demands for fault tolerance throughout the rounds. One-shot FL, which limits client-server communication into one round as explored in previous works [Guha et al., 2019; Li et al., 2020a; Diao et al., 2023], has emerged as a promising yet challenging scheme to address these issues. It proves particularly practical in such scenarios where iterative communication is not feasible. Additionally, a reduction in communication rounds translates to fewer opportunities for any potential eavesdropping attacks. 
While one-shot FL shows promise, existing methods often grapple with several challenges such as non-independent and non-identically distributed (non-IID) data [Zhou et al., 2020; Zhang et al., 2021], or inadequate handling of high statistical heterogeneity information in the previous works. Moreover, some methods rely on an auxiliary public dataset to achieve satisfactory performance in one-shot FL (Guha et al., 2019; Li et al., 2020a), or even on pre-trained large models (Yang et al., 2023), which may not be practical (Zhu et al., 2021) in some sensitive scenarios. Additionally, certain approaches (Shin et al., 2020; Zhang et al., 2021; Heinbaugh et al., 2022; Diao et al., 2023) might expose data/label privacy to the local and global models, e.g., the client label distribution, potentially violating General Data Protection Regulation (GDPR) rules. Furthermore, some of these methods (Li et al., 2020a; Zhou et al., 2020; Zhang et al., 2022a) may require substantial computing resources for dataset distillation, model distillation, or even training a generator capable of generating synthetic data for second-stage training on the server side. On the other hand, the one-shot FL performance always falls short when dealing with non-IID data. Non-IID data biases global updates, reducing the accuracy of the global model and slowing down convergence. In extreme non-IID cases, clients may be required to address distinct classes. To tackle this heterogeneity among clients, personalized federated learning (PFL) (Smith et al., 2017) in multi-round settings becomes essential, allowing each client to use a personalized model instead of a shared global model. With the personalized approach, the multi-round framework benefits from joint training while allowing each client to keep its own unique model. However, one-shot aggregation on the local model is far from being resolved under the personalized setting. In this paper, we introduce a novel one-shot aggregation approach to address these issues, named FedLPA, i.e., federated learning with Layer-wise Posterior Aggregation. FedLPA infers the posteriors of each layer in each local model using the empirical Fisher information matrix obtained by Layer-wise Laplace Approximation. Laplace Approximations are widely used to compute the empirical Fisher information matrix for the neural networks, conveying the data statistics in personalized settings. However, computing empirical Fisher information matrices of multiple personalized local clients and aggregating their Fisher information matrices remains an ongoing challenge (Liu et al., 2021). FedLPA aggregates the posteriors of local models using the accurately computed block-diagonal empirical Fisher information matrices as a metric of the parameter space. This matrix captures essential parameter correlations and distinguishes itself from prior methods by being non-diagonal and non-low-rank, thereby conveying the statistics of biased local datasets. In our approach, we directly train global model parameters after the aggregation without any need for server-side knowledge distillation (Lin et al., 2020). Our experiments verify the efficiency and effectiveness of FedLPA, highlighting that FedLPA markedly enhances the test accuracy when compared to existing one-shot FL approaches across various datasets. 
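To make this aggregation idea concrete, the sketch below uses a diagonal empirical Fisher purely for readability: each client sends its parameters and Fisher terms once, and the server forms the product of the resulting Gaussian posteriors, i.e., a Fisher-weighted average of the client models. FedLPA itself computes layer-wise block-diagonal (non-diagonal, non-low-rank) Fisher matrices and directly trains the global parameters, so the diagonal form, the function names, and the defaults here are simplifying assumptions rather than the exact FedLPA procedure.

```python
import torch

def diagonal_empirical_fisher(model, loader, loss_fn):
    """Diagonal empirical Fisher: average of squared per-batch gradients.
    (A simplification; FedLPA uses layer-wise block-diagonal Fisher matrices.)"""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    n = 0
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
        n += 1
    return [f / max(n, 1) for f in fisher]

def aggregate_gaussian_posteriors(client_params, client_fishers, eps=1e-8):
    """Product of Gaussian posteriors N(theta_k, F_k^{-1}): the aggregated mean is
    the Fisher-weighted average of the client means (one-shot and data-free)."""
    num = [torch.zeros_like(p) for p in client_params[0]]
    den = [torch.zeros_like(p) for p in client_params[0]]
    for params, fishers in zip(client_params, client_fishers):
        for i, (p, f) in enumerate(zip(params, fishers)):
            num[i] += f * p
            den[i] += f
    return [n / (d + eps) for n, d in zip(num, den)]
```

In the full method, the block-diagonal Fisher terms additionally capture correlations between parameters within each layer, which is what allows the aggregation to reflect the statistics of the biased local datasets.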
Our main contributions are summarized as follows: • To the best of our knowledge, we are the first to propose a personalized one-shot federated learning approach that directly trains the global models using the block-diagonal empirical Fisher information matrices. Our approach is data-free without the need for any auxiliary information and significantly enhances the system performance, including negligible communication cost and moderate computing overhead. • We are the first to train the global model parameters via constructing a multi-variate linear objective function and using its quadratic form, which allows us to formulate and solve this problem in a convex form. Nevertheless, from the theoretical analysis, we show that FedLPA has a linear convergence rate, ensuring good performance. • We conduct extensive experiments to illustrate the effectiveness of FedLPA. Our approach consistently outperforms the baselines, showcasing substantial enhancements across various settings and datasets. Even in some extreme scenarios with severe label skew, e.g., where each client has only one class, in which many federated learning algorithms struggle, we achieve satisfactory results. 2 BACKGROUND AND RELATED WORKS 2.1 FEDERATED LEARNING Previous work Fedavg (McMahan et al., 2016) first introduced the concept of FL and presented the algorithm, which achieved competitive performance on i.i.d data, in comparison to several centralized techniques. However, it was observed in previous works (Li et al., 2019; Zhao et al., 2018) that the convergence rate and ultimate accuracy of FedAvg on non-IID data distributions were significantly reduced, compared to the results observed with homogeneous data distributions. Other methods have been developed to enhance performance in federated learning. The SCAFFOLD method (Karimireddy et al., 2020) leveraged control variates to reduce objective inconsistency in local updates. It estimated the drift of directions in local optimization and global optimization and incorporated this drift into local training to align the local optimization direction with the global optimization. Fednova (Wang et al., 2020b) addressed objective inconsistency while maintaining rapid error convergence through a normalized averaging method. It scaled and normalized the local updates of each client based on the number of local optimization steps. Fedprox (Li et al., 2020c) enhanced the local training process by introducing a global prior in the form of an $L_2$ regularization term within the local objective function. In Yurochkin et al. (2019); Wang et al. (2020a), researchers introduced PFNM, a Bayesian probabilistic framework specifically tailored for multilayer perceptrons. PFNM employed a Beta-Bernoulli process (BBP) (Thibaux & Jordan, 2007) to aggregate local models, quantifying the degree of alignment between global and local parameters. The framework proposed in Liu et al. (2021) utilized a multivariate Gaussian product method to construct a global posterior by aggregating local posteriors estimated using an online Laplace approximation. FedPA (Al-Shedivat et al., 2020) also applied the Gaussian product method but employed stochastic gradient Markov chain Monte Carlo for approximate inference of local posteriors. DAFL (Data-Free Learning) (Chen et al., 2019) introduced an innovative framework based on generative adversarial networks. ADI (Yin et al., 2020) utilized an image synthesis method that leveraged the image distribution to train deep neural networks without real data. 
The pFedHN method (Shamsian et al., 2021) incorporated HyperNetworks (Krueger et al., 2017) to address federated learning applications. However, all of these methods encountered challenges in the personalized one-shot federated learning setting, as they required aggregating the model by multiple rounds and might be inaccurate due to the omission of critical information, such as posterior joint probabilities between different parameters. ### 2.2 One-shot Federated Learning One-shot Federated Learning (FL) is an emerging and promising research direction characterized by its minimal communication cost. In the first study on one-shot FL (Guha et al., 2019), the approach involved on the aggregation of local models, forming an ensemble to construct the final global model. Subsequently, knowledge distillation using public data was applied in the following step. FedKT (Li et al., 2020a) brought forward the concept of consistent voting to fortify the ensemble. Recent research endeavors (Zhang et al., 2021; 2022a) proposed data-free knowledge distillation schemes tailored for one-shot FL. These methods adopted the basic ensemble distillation framework as FedDF (Lin et al., 2020). XorMixFL (Shin et al., 2020) introduced the use of exclusive OR operation (XOR) for encoding and decoding samples in data sharing. It is important to note that XorMixFL assumed the possession of labeled samples from a global class by all clients and the server, which might not align with practical real-world scenarios. A noteworthy innovation of DENSE (Zhang et al., 2022a) was its utilization of a generator to create synthetic datasets on the server side, circumventing the need for a public dataset in the distillation process. FedOV (Diao et al., 2023) delved into addressing comprehensive label skew cases. FEDCVAE (Heinbaugh et al., 2022) confronted this challenge by transmitting all label distributions from clients to servers. These schemes (Shin et al., 2020; Li et al., 2020a; Zhang et al., 2021; 2022a; Heinbaugh et al., 2022; Diao et al., 2023) exposed some client-side private information, leading to additional communication overhead and potential privacy leakage, e.g., FEDCVAE (Heinbaugh et al., 2022) needed all the client label distribution to be transmitted to the server side and FedOV (Diao et al., 2023) needed the clients to know the labels which were unknown. Instead, MA-Echo (Su et al., 2023) adopted a unique approach by emphasizing the addition of norms among layer-wide parameters during the aggregation of local models. FedFisher (Jhunjhunwala et al., 2023) also leveraged empirical Fisher information matrix, but focused on theoretic analysis of the error on its approximation method. However, their method grappled with limited experiments and lacked detailed explanations of the approach. FedDISC (Yang et al., 2023), on the other hand, relied on the pre-trained model CLIP from OpenAI, where their reliance might not always align with practicality or suitability for diverse scenarios. While some of these techniques are orthogonal to FedLPA and can be integrated with it, it is worth noting that none of the previously mentioned algorithms possess the capability to train global model parameters using empirical Fisher information matrices. Some of them may require additional information, entail the risk of breaching data/label privacy. 
3 METHODOLOGY

3.1 OBJECTIVE FORMULATION

Generally, federated learning is defined as an optimization problem (Li et al., 2020b,c; Karimireddy et al., 2020; Wang et al., 2020b) that maximizes a global objective function \( F(\theta) \), which is a mixture of local objective functions \( F_k(\theta, D_k) \):
\[
F(\theta) = \sum_{k=1}^{K} F_k(\theta, D_k), \tag{1}
\]
where \( \theta = [\text{vec}(W_1), \ldots, \text{vec}(W_l), \ldots, \text{vec}(W_L)] \) is the parameter vector of the global model and \( W_l \) collects the weight and bias of layer \( l \) of an \( L \)-layer neural network; \( D_k \) is the local dataset of the \( k \)-th client. \( F_k(\theta, D_k) \) is the expectation of the local objective function, which is proportional to the log-likelihood \( \log p(D_k|\theta) \). Previous works (Liu et al., 2021; Al-Shedivat et al., 2020) give a common formula for the global posterior, composed of the local posteriors \( p(\theta|D_k) \) under a variational inference formulation:
\[
p(\theta|D) \propto \prod_{k=1}^{K} p(D_k|\theta) \propto \prod_{k=1}^{K} p(\theta|D_k), \tag{2}
\]
\[
\max_{\theta} F(\theta) = \max_{\theta} \sum_{k=1}^{K} \frac{|D_k|}{|D|} \cdot \mathbb{E}_{s \in D_k} [\log p(s|\theta)] \equiv \max_{\theta} \prod_{k=1}^{K} p(\theta|D_k). \tag{3}
\]
Since the objective function is the expectation of the log-likelihood and the sum of logarithms equals the logarithm of the product, as in Eq. 3, global variational inference with Eq. 2 is equivalent to optimizing Eq. 1. Correspondingly, we get:
\[
\max_{\theta} F_k(\theta, D_k) \equiv \max_{\theta} p(\theta|D_k). \tag{4}
\]
Following the standard training pattern of federated learning, each client infers the local posterior \( p(\theta|D_k) \) using its local dataset \( D_k \) and then uploads the probability parameters to the server. The server then obtains the global posterior \( p(\theta|D) \) by aggregating the local posteriors using Eq. 2. However, both the global and the local posteriors are usually intractable because modern neural networks are non-linear and have a large number of parameters. Therefore, it is necessary to design an efficient and accurate aggregation method for one-shot federated learning.

3.2 APPROXIMATING POSTERIORS

Although the posterior is usually intractable, it can be approximated as a Gaussian distribution by performing a Taylor expansion on the logarithm of the posterior (Ritter et al., 2018):
\[
\log p(\theta|D) \approx \log p(\theta^*|D) - \frac{1}{2} (\theta - \theta^*)^\top \bar{H} (\theta - \theta^*), \tag{5}
\]
where \( \theta^* \) is the optimal parameter vector and \( \bar{H} = \mathbb{E}_{s \in D}[H_s] \) is the average Hessian of the negative log posterior over the dataset \( D \), with \( H_s \) the per-sample Hessian. It is therefore reasonable to approximate the global and local posteriors as multivariate Gaussian distributions with expectations \( \mu = \theta^* \) and \( \mu_k = \theta_k^* \), and covariances \( \Sigma = \bar{H}^{-1} \) and \( \Sigma_k = \bar{H}_k^{-1} \) (Daxberger et al., 2021). The details are discussed in Appendix B.
\[
p(\theta|D) \equiv \theta \sim N(\mu, \Sigma), \quad p(\theta|D_k) \equiv \theta \sim N(\mu_k, \Sigma_k). \tag{6}
\]
As a result, given the local expectations \( \mu_k \) and local covariances \( \Sigma_k \), the global posterior is determined by Eq. 2 as follows:
\[
\bar{\mu} = \bar{\Sigma} \sum_{k=1}^{K} \Sigma_k^{-1} \mu_k, \quad \bar{\Sigma}^{-1} = \sum_{k=1}^{K} \Sigma_k^{-1}. \tag{7}
\]
Modern optimization algorithms (Rumelhart et al., 1986; Martens & Grosse, 2015a) allow the local training process to reach a local optimum \( \theta_k^* \), which is regarded as the expectation \( \mu_k \) in the equations above.
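To make the aggregation rule of Eq. 7 concrete, the following minimal NumPy sketch forms the global Gaussian from local Gaussian posteriors. The function name, toy dimensions, and the assumption that exact local precision matrices \( \Sigma_k^{-1} \) are available are illustrative only and not part of FedLPA's released code; in practice these precisions must themselves be approximated, as discussed next.

```python
import numpy as np

def aggregate_laplace_posteriors(means, precisions):
    """Product of local Gaussians N(mu_k, Sigma_k), as in Eq. 7 (illustrative sketch)."""
    global_precision = sum(precisions)                             # Sigma_bar^{-1} = sum_k Sigma_k^{-1}
    weighted_sum = sum(P @ m for P, m in zip(precisions, means))   # sum_k Sigma_k^{-1} mu_k
    global_mean = np.linalg.solve(global_precision, weighted_sum)  # mu_bar = Sigma_bar * (...)
    return global_mean, global_precision

# toy example with K = 2 clients and a 3-dimensional parameter vector
rng = np.random.default_rng(0)
means = [rng.normal(size=3) for _ in range(2)]
precisions = []
for _ in range(2):
    G = rng.normal(size=(3, 3))
    precisions.append(G @ G.T + np.eye(3))  # symmetric positive definite precision
mu_bar, prec_bar = aggregate_laplace_posteriors(means, precisions)
```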
However, \( \bar{H}_k \) is intractable to compute due to the large number of parameters in modern neural networks. An efficient method is to approximate it using the empirical Fisher information matrix (Van Loan, 2000).

### 3.3 Inferring the Local Layer-Wise Posteriors with the Block-Diagonal Empirical Fisher Information Matrices

An empirical Fisher \( \tilde{F} \) is defined as:
\[
\tilde{F} = \sum_{s \in D} \nabla_\theta \log p(s|\theta) \, \nabla_\theta \log p(s|\theta)^\top, \tag{8}
\]
where \( p(s|\theta) \) is the likelihood of data point \( s \). It is an approximation of the Fisher information matrix, and it is equivalent to the expectation of the Hessian of the negative log posterior under the assumption that \( p(s|\theta) \) is identical for each \( s \in D \). Therefore, the local covariance \( \Sigma_k \) can be approximated via the empirical Fisher \( \tilde{F}_k \) (Martens & Grosse, 2015b; Grosse & Martens, 2016). The details are discussed in Appendix C.
\[
\Sigma_k^{-1} \approx \tilde{F}_k + \lambda I. \tag{9}
\]
Earlier approaches (Kirkpatrick et al., 2017; Liu et al., 2021) ignore correlations between different parameters and only consider each parameter's own variance, since computing all correlations is infeasible; this makes them inaccurate. To capture correlations between different parameters efficiently, previous works (Martens & Grosse, 2015a; Ritter et al., 2018) estimate a block empirical Fisher information matrix \( F \) instead of assuming parameters are independent and approximating the covariance by the diagonal of the empirical Fisher. As pointed out in Martens & Grosse (2015a); Benzing (2022); Zhang et al. (2022), correlations within a layer are much more significant than those across layers, while computing cross-layer correlations brings only a slight improvement at a much higher computational cost. Therefore, assuming that parameters of different layers are independent is a good trade-off. As a result, the approximated layer-wise empirical Fisher is block-diagonal; the details are shown in Appendix D. For layer \( l \) on client \( k \), its empirical Fisher \( F_{kl} \) is one of the diagonal blocks of the whole empirical Fisher of the local model and is factored into two small matrices:
\[
\Sigma_{kl}^{-1} \approx F_{kl} = A_{kl} \otimes B_{kl}, \tag{10}
\]
where \( \otimes \) is the Kronecker product; \( A_{kl} = \hat{a}_{k,l-1} \hat{a}_{k,l-1}^\top + \pi_l \sqrt{\lambda} I \) and \( B_{kl} = \hat{b}_{kl} \hat{b}_{kl}^\top + \frac{1}{\pi_l} \sqrt{\lambda} I \) are the two factor matrices; \( \hat{a}_{k,l-1} \) denotes the activations feeding layer \( l \) and \( \hat{b}_{kl} \) the linear pre-activations of layer \( l \) on client \( k \); \( \lambda \) is a hyperparameter and \( \pi_l \) is a factor minimizing the approximation error in \( F_{kl} \) (Martens & Grosse, 2015a; Grosse & Martens, 2016; Botev et al., 2017). \( A_{kl} \) and \( B_{kl} \) are symmetric positive definite matrices (Rumelhart et al., 1986; Martens & Grosse, 2015a). The details are discussed in Appendix D. We use \( \theta_{kl} \) to denote the parameter vector of layer \( l \) and \( \mu_{kl} = \text{vec}(W_{kl}^*) \) to denote the vectorized optimal weight matrix of layer \( l \) on client \( k \). Thus, the resulting local layer-wise posterior approximation is \( \theta_{kl} \sim N(\mu_{kl}, F_{kl}^{-1}) \).

3.4 Estimating the Global Expectation

Given the local posteriors, the global expectation can be aggregated by Eq. 7.
With the factorization in Eq. 10, the \( l \)-th layer’s global expectation \( \bar{\mu}_l \) can be written in terms of Kronecker products:
\[
\bar{\mu}_l = \bar{\Sigma}_l \sum_{k=1}^{K} \Sigma_{kl}^{-1} \mu_{kl} = \bar{\Sigma}_l \sum_{k=1}^{K} (\mathbf{A}_{kl} \otimes \mathbf{B}_{kl}) \mu_{kl} = \bar{\Sigma}_l \sum_{k=1}^{K} \text{vec}(\mathbf{B}_{kl} \mathbf{M}_{kl} \mathbf{A}_{kl}) = \bar{\Sigma}_l \sum_{k=1}^{K} z_{kl} = \bar{\Sigma}_l \, \bar{z}_l, \tag{11}
\]
where \( \bar{z}_l = \sum_{k=1}^{K} z_{kl} \) and \( z_{kl} = \text{vec}(\mathbf{B}_{kl} \mathbf{M}_{kl} \mathbf{A}_{kl}) \) is an intermediate notation introduced for simplification; \( \mathbf{M}_{kl} \) is the local expectation in matrix form, i.e., \( \mu_{kl} = \text{vec}(\mathbf{M}_{kl}) \). The corresponding global covariance is the inverse of a sum of Kronecker products:
\[
\bar{\Sigma}_l = \left( \sum_{k=1}^{K} \mathbf{A}_{kl} \otimes \mathbf{B}_{kl} \right)^{-1}. \tag{12}
\]
As shown in Eq. 11, obtaining the global expectation \( \bar{\mu}_l \) requires inverting \( \bar{\Sigma}_l^{-1} \) as in Eq. 12, which is computationally prohibitive. Thus, we propose to directly train the parameters of the global model on the server side.

3.5 Training the Parameters of the Global Model

Previous works (Martens & Grosse, 2015a; Grosse & Martens, 2016) approximate the expectation of Kronecker products by a Kronecker product of expectations, \( \mathbb{E}[\mathbf{A} \otimes \mathbf{B}] \approx \mathbb{E}[\mathbf{A}] \otimes \mathbb{E}[\mathbf{B}] \), under the assumption that \( \mathbf{A}_{kl} \) and \( \mathbf{B}_{kl} \) are independent, which is called Expectation Approximation (EA). However, it may lead to a biased global expectation. The details are discussed in Appendix E. Instead, we construct a linear objective after aggregating the approximations of the local posteriors via the block-diagonal empirical Fisher information matrices. For each layer (we omit the layer index \( l \) on \( \bar{\mu} \), \( \bar{\Sigma} \), \( \bar{z} \), and \( \bar{\mathbf{M}} \) for brevity), we denote \( \bar{\mathbf{M}} \) as the matrix form of \( \bar{\mu} \), i.e., \( \bar{\mu} = \text{vec}(\bar{\mathbf{M}}) \), and the optimal solution of \( \bar{\mu} \) is \( \bar{\mu}^* = \text{vec}(\bar{\mathbf{M}}^*) \). Since \( \bar{\mu} = \bar{\Sigma} \cdot \bar{z} \), we construct \( f(\bar{\mu}) \) as a multivariate linear objective function. When \( \bar{\mu} = \bar{\mu}^* \) is the optimal solution, \( f(\bar{\mu}) = \mathbf{0} \), the all-zero vector. Note that
\[
f(\bar{\mu}) = \bar{\Sigma}^{-1} \bar{\mu} - \bar{z} = \sum_{k=1}^{K} \text{vec}(\mathbf{B}_{kl} \bar{\mathbf{M}} \mathbf{A}_{kl}) - \bar{z} = \text{vec}\!\left( \sum_{k=1}^{K} \mathbf{B}_{kl} \bar{\mathbf{M}} \mathbf{A}_{kl} \right) - \bar{z}. \tag{13}
\]
To obtain the optimal solution, we minimize the following problem, yielding an approximate solution \( \bar{\mathbf{M}}^* \) of \( \bar{\mathbf{M}} \):
\[
\bar{\mathbf{M}}^* = \arg\min_{\bar{\mathbf{M}}} \frac{1}{2} \left\| \sum_{k=1}^{K} \text{vec}(\mathbf{B}_{kl} \bar{\mathbf{M}} \mathbf{A}_{kl}) - \bar{z} \right\|^2_2. \tag{14}
\]
This is a quadratic objective that can be solved efficiently and conveniently by modern optimization tools. Since the objective is both convex and Lipschitz smooth w.r.t. \( \text{vec}(\bar{\mathbf{M}}) \), we can solve it with gradient descent at a linear convergence rate (see the detailed proof in Appendix F). Here, we use automatic differentiation to compute the gradient w.r.t. \( \bar{\mathbf{M}} \). The overall procedure is summarized as Algorithm 1 in Appendix A. The data transmitted between the clients and the server is solely \( \mathbf{A}_k, \mathbf{B}_k, \mathbf{M}_k \), without any extra auxiliary information, which preserves the data/label privacy of the local clients.
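As a hedged illustration of Eqs. 10–14, the sketch below builds per-client Kronecker factors and then trains the global layer weights by gradient descent on the quadratic objective using automatic differentiation. The helper names, the SGD settings, and the average-of-local-weights initialization are assumptions made for the example, not details prescribed by the paper.

```python
import torch

def kronecker_factors(a_prev, b, lam=1e-3, pi=1.0):
    """Eq. 10 factors from activations a_prev and pre-activation statistics b (sketch)."""
    A = torch.outer(a_prev, a_prev) + pi * (lam ** 0.5) * torch.eye(a_prev.numel())
    B = torch.outer(b, b) + (1.0 / pi) * (lam ** 0.5) * torch.eye(b.numel())
    return A, B

def train_global_layer(A_list, B_list, M_list, steps=500, lr=0.1):
    """Server-side optimization of Eq. 14 for one layer.

    A_list, B_list: per-client factors A_{kl} (in x in) and B_{kl} (out x out)
    M_list:         per-client local optima M_{kl} (out x in), i.e. mu_{kl} = vec(M_{kl})
    """
    # z_bar = sum_k vec(B_{kl} M_{kl} A_{kl}), kept in matrix form since vec is linear
    Z_bar = sum(B @ M @ A for A, B, M in zip(A_list, B_list, M_list))
    # assumed initialization: plain average of the local weights
    M_bar = torch.stack(M_list).mean(dim=0).clone().requires_grad_(True)
    optimizer = torch.optim.SGD([M_bar], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        residual = sum(B @ M_bar @ A for A, B in zip(A_list, B_list)) - Z_bar
        loss = 0.5 * (residual ** 2).sum()   # quadratic objective of Eq. 14
        loss.backward()                       # automatic differentiation w.r.t. M_bar
        optimizer.step()
    return M_bar.detach()
```

Because the loss is convex and Lipschitz smooth in \( \text{vec}(\bar{\mathbf{M}}) \), plain gradient descent of this form converges linearly, consistent with the analysis referenced above.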
3.6 t-SNE Observation and Discussions To quickly demonstrate the effectiveness of FedLPA, We show the t-SNE visualization of our FedLPA global model on MNIST dataset with biased training data setting among 10 local clients as an example. The experiment details, t-SNE visualizations of the local models and the global models of other algorithms and discussions are in the Appendix G.2. As shown in Figure 1, FedLPA generates the global model which can distinguish the ten classes, meanwhile, the classes are separate. 4 EXPERIMENTS 4.1 Experiments Settings Datasets. We conduct experiments on MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011) datasets. We use the data partitioning methods for non-IID settings of the benchmark\(^1\) to simulate different label skews. Specifically, we try two different kinds of partition: 1) \(C = k\): each client only has data from \(k\) classes. We first assign \(k\) random class IDs for each client. Next, we randomly and equally divide samples of each class to their assigned clients; 2) \(p_k \sim \text{Dir}(\beta)\): for each class, we sample from Dirichlet distribution \(p_k \sim \text{Dir}(\beta)\) and distribute \(p_{k,j}\) portion of class \(k\) samples to client \(j\). In this case, smaller \(\beta\) denotes worse skews. Training Details. By default, we follow Fedavg (McMahan et al., 2017) and other existing studies (Wang et al., 2020c; Li et al., 2022; Diao et al., 2023) to use a simple CNN with 5 layers in our experiments. We set the batch size to 64, the learning rate to 0.001, and the \(\lambda = 0.001\) for FedLPA. By default, we set 10 clients and run 200 local epochs for each client. For the various settings of the number of clients and local epochs, we refer to Section 4.3 and Section 4.4. For results with error bars, we run three experiments with different random seeds. All methods were evaluated under fair comparison settings. Due to the page limit, we only present some representative results in the main paper. For more experimental details and results, please refer to Appendix G. Baselines. To ensure fair comparisons, we neglect the comparison with methods that require to download auxiliary models or datasets, such as FedBE (Chen & Chao, 2020), FedKT (Li et al., 2020a) and FedGen (Zhu et al., 2021), or even pretrained large model, like FedDISC (Yang et al., 2023). FedOV (Diao et al., 2023) and FEDCAVE (Heinbaugh et al., 2022) entail sharing more client-side label information or transmitting client label information to the server, which could jeopardize label privacy and are beyond the scope of this study. XorMixFL (Shin et al., 2020) may be not practical as we mentioned before. FedFisher (Jhunjhunwala et al., 2023) is not publicly available. FedDF (Lin et al., 2020), DAFL (Chen et al., 2019) and ADI (Yin et al., 2020) are compared with the state-of-the-art data-free method DENSE (Zhang et al., 2022a). In conclusion, we include one-shot FL algorithms as baselines including Fedavg (McMahan et al., 2017), Fedprox (Li et al., 2020c), Fednova (Wang et al., 2020b), SCAFFOLD (Karimireddy et al., 2020) and the state-of-the-art data-free method DENSE (Zhang et al., 2022a). All the methods are fairly compared, and our implementation is available and the experiment details can be viewed in Appendix G.10. 4.2 An Overall Comparison We compare the accuracy between FedLPA and the other baselines as shown in Table 1, the data in the green shadow shows the best results. 
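Before turning to the numbers, the label-skew partitions described in Section 4.1 can be reproduced with a short sketch in the spirit of the NIID-Bench tooling cited above; the function name and the seeding convention are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, beta=0.5, seed=0):
    """p_k ~ Dir(beta): per class, draw client proportions and scatter samples accordingly.

    Smaller beta yields a more severe label skew, matching the settings in Table 1.
    Returns a list of sample-index lists, one per client.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet([beta] * n_clients)
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cut_points)):
            client_indices[k].extend(part.tolist())
    return client_indices
```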
FedLPA can achieve the best performance in all the dataset and partition settings. In extreme cases such as \(\beta = \{0.01, 0.05\}, C = 1, C = 2\), FedLPA exhibits a significant performance advantage over the baseline algorithms. This demonstrates our framework’s ability to effectively aggregate valuable information from local clients for global weight training. In summary, the state-of-the-art DENSE could be comparable with FedLPA when the skew level is small. However, with the increment of skewness, FedLPA shows significantly superior results. 4.3 Scalability We assess the scalability of FedLPA by varying the number of clients. In this section, we show results on FMNIST in Table 2. From the table, we can observe that FedLPA still almost always achieves the best accuracy when increasing the number of clients. Notably, there is a slight exception highlighted in red, where DENSE outperforms us when we have 20 clients and \(\beta = 0.5\), this may --- \(^1\)https://github.com/Xtra-Computing/NIID-Bench Table 1: Comparison with various FL algorithms in one round. | Dataset | Partition | FedLPA | Fednova | SCAFFOLD | Fedavg | Fedprox | DENSE | |-------------|-----------|----------|-----------|------------|-----------|-----------|----------| | FNIST | β=0.01 | 21.20±0.07 | 10.00±0.00 | 15.97±0.12 | 18.73±0.05 | 13.37±0.10 | 15.56±0.14 | | | β=0.05 | 54.87±3.88 | 18.67±0.41 | 18.67±0.41 | 22.05±1.14 | 47.77±0.20 | 52.93±0.67 | | | β=0.1 | 53.33±0.06 | 30.47±0.59 | 31.40±0.26 | 30.93±0.58 | 31.00±0.52 | 42.70±0.67 | | | β=0.3 | 68.20±0.04 | 49.40±0.26 | 46.00±0.02 | 45.17±0.05 | 44.30±0.08 | 64.27±0.08 | | | β=0.5 | 73.33±0.06 | 57.03±0.28 | 56.03±0.28 | 59.10±0.63 | 58.10±0.47 | 72.87±0.13 | | | β=1.0 | 76.03±0.05 | 63.63±0.33 | 66.10±0.02 | 62.13±0.43 | 63.10±0.29 | 72.97±0.01 | | | #C=1 | 13.20±1.00 | 10.33±0.00 | 10.37±0.00 | 10.37±0.00 | 10.00±0.00 | 10.00±0.00 | | | #C=2 | 10.15±0.15 | 21.00±0.10 | 23.53±0.22 | 23.20±0.08 | 19.97±0.10 | 38.35±0.45 | | | #C=3 | 57.90±0.06 | 27.47±0.02 | 27.37±0.36 | 29.20±0.03 | 23.93±0.33 | 53.40±0.07 | | CIFAR-10 | β=0.01 | 16.17±0.00 | 11.57±0.02 | 11.47±0.01 | 11.53±0.05 | 10.47±0.00 | 12.30±0.03 | | | β=0.05 | 18.37±0.00 | 10.30±0.00 | 10.73±0.01 | 10.23±0.00 | 10.97±0.02 | 17.87±0.31 | | | β=0.1 | 19.97±0.02 | 12.30±0.04 | 10.87±0.01 | 12.83±0.06 | 11.97±0.04 | 19.93±0.07 | | | β=0.3 | 20.09±0.01 | 11.72±0.03 | 10.93±0.01 | 10.52±0.00 | 11.05±0.09 | 25.49±0.84 | | | β=0.5 | 24.22±0.02 | 11.07±0.00 | 11.77±0.02 | 10.97±0.00 | 9.33±0.00 | 20.17±0.73 | | | β=1.0 | 29.33±0.00 | 12.00±0.00 | 13.00±0.00 | 13.23±0.00 | 13.63±0.01 | 28.23±0.34 | | | #C=1 | 10.70±0.01 | 10.50±0.00 | 10.27±0.00 | 10.23±0.00 | 10.37±0.01 | 10.00±0.00 | | | #C=2 | 16.40±0.00 | 10.07±0.00 | 12.03±0.08 | 10.07±0.00 | 10.03±0.00 | 14.13±0.22 | | | #C=3 | 11.10±0.01 | 11.10±0.01 | 11.56±0.01 | 11.56±0.01 | 11.56±0.01 | 14.15±0.11 | | MNIST | β=0.01 | 73.17±1.16 | 13.53±0.02 | 8.68±0.01 | 9.37±0.00 | 9.33±0.00 | 15.80±0.24 | | | β=0.05 | 70.07±0.05 | 31.60±0.71 | 41.07±0.46 | 38.57±0.28 | 32.23±0.18 | 57.83±1.55 | | | β=0.1 | 77.43±0.14 | 48.07±0.28 | 47.73±0.22 | 48.63±0.15 | 47.40±0.00 | 70.33±0.02 | | | β=0.3 | 85.77±0.02 | 67.67±0.40 | 67.07±0.15 | 66.17±0.21 | 63.40±0.41 | 84.50±0.01 | | | β=0.5 | 88.73±0.07 | 79.27±0.08 | 78.57±0.29 | 77.57±0.07 | 79.60±0.24 | 86.33±0.36 | | | β=1.0 | 90.83±0.08 | 84.01±0.02 | 83.18±0.09 | 83.16±0.13 | 83.16±0.10 | 93.11±0.02 | | | #C=1 | 11.43±0.01 | 10.27±0.02 | 10.10±0.01 | 10.10±0.01 | 10.13±0.01 | 9.93±0.00 | | | #C=2 | 69.63±0.29 | 
20.90±0.49 | 25.23±1.08 | 16.47±0.23 | 14.30±0.34 | 52.73±0.46 | | | #C=3 | 77.13±0.24 | 25.33±1.65 | 25.33±1.65 | 23.34±1.65 | 23.34±1.65 | 29.10±0.31 | | SVHN | β=0.01 | 19.20±0.00 | 13.73±0.14 | 9.83±0.00 | 12.13±0.04 | 11.43±0.12 | 17.33±0.28 | | | β=0.05 | 22.93±0.38 | 14.80±0.43 | 14.80±0.43 | 15.90±0.14 | 15.90±0.12 | 21.47±0.20 | | | β=0.1 | 23.10±0.29 | 23.97±0.13 | 25.70±0.65 | 22.17±1.02 | 16.27±0.00 | 19.49±0.45 | | | β=0.3 | 52.23±0.26 | 34.40±0.28 | 34.03±0.06 | 33.93±0.26 | 34.70±0.20 | 47.13±7.14 | | | β=0.5 | 54.27±0.02 | 38.53±0.07 | 40.07±0.13 | 38.53±0.15 | 36.93±0.09 | 53.70±0.07 | | | β=1.0 | 67.80±0.01 | 55.60±0.08 | 54.03±0.14 | 55.97±0.04 | 55.23±0.12 | 54.40±0.43 | | | #C=1 | 19.60±0.00 | 10.43±0.00 | 10.00±0.00 | 10.00±0.00 | 10.33±0.00 | 10.00±0.00 | | | #C=2 | 47.03±0.63 | 12.90±0.27 | 24.47±0.08 | 20.17±0.04 | 17.47±0.13 | 37.67±0.76 | | | #C=3 | 48.00±0.22 | 20.87±0.12 | 28.37±0.09 | 27.60±0.03 | 24.93±0.10 | 47.43±0.40 | be attributed to the dataset being less biased and the DENSE only getting a marginal 0.03% higher test accuracy. Our method is generally much more robust in all kinds of settings. ### 4.4 Ablation Study The hyper-parameter of our approach is $\lambda$ from Eq. (30), which controls variances of a priori normal distribution and guarantees $A_k$ and $B_k$ are positive semi-definite. In this part, we show results on FMNIST. All other Laplace Approximations are sensitive to the hyper-parameter $\lambda$ based on their experimental results, Table 3 shows that our approach is relatively robust. Based on our numerical results, we set $\lambda = 0.001$ by default for our method FedLPA. We also conduct the experiments when the local epochs are 10,20,50,100. More experiments are available in Appendix C.3, which shows that our methods outperform all the baselines in all kinds of scenarios without requiring extensive tuning. ### 4.5 Communication and Computation Overhead Table 3: Experimental results of different hyper-parameter $\lambda$ on FMNIST dataset. | value of $\lambda$ | 0.01 | 0.001 | 0.0001 | |-------------------|--------|--------|--------| | $\beta=0.01$ | 18.63±0.78 | 21.20±0.67 | 22.50±1.84 | | $\beta=0.05$ | 54.33±0.54 | 54.27±0.38 | 53.30±0.01 | | $\beta=0.1$ | 56.83±0.19 | 55.33±0.06 | 54.60±0.15 | | $\beta=0.3$ | 66.83±0.02 | 68.20±0.04 | 67.53±0.03 | | $\beta=0.5$ | 73.20±0.03 | 73.33±0.06 | 72.17±0.04 | | $\beta=1.0$ | 76.53±0.02 | 76.03±0.05 | 73.47±0.19 | | $\#C=1$ | 12.73±0.01 | 13.20±0.02 | 14.17±0.02 | | $\#C=2$ | 45.20±0.21 | 46.13±0.15 | 44.80±0.03 | | $\#C=3$ | 58.97±0.07 | 57.90±0.06 | 55.60±0.06 | We conduct experiments on CIFAR-10 on a single 2080Ti GPU to estimate the overall communication and computation overhead. We set the number of clients is 10. Table 4 shows the numerical results on FedLPA and other baseline approaches. Details of the overhead evaluation are referred to Appendix G.8 and G.9. Our observations reveal that FedLPA is slightly slower than Fednova, SCAFFOLD, Fedavg, and Fedprox, while much faster than the state-of-art data-free approach DENSE. Note that FedLPA has significantly improved the one-shot learning performance of the above four approaches. Similarly, FedLPA performs moderately incremental communication overhead while outperforming other baseline approaches on learning performance. It’s noteworthy that FedLPA strikes a favorable balance between computation and communication overhead, making it the most promising approach for one-shot FL. Table 4: Communication and computation overhead evaluation. 
| | Overall Computation (mins) | Overall Communication (MB) | |----------------|----------------------------|----------------------------| | FedLPA | 65 | 2.77 | | Fednova | 50 | 2.47 | | SCAFFOLD | 50 | 4.94 | | Fedavg | 50 | 2.47 | | Fedprox | 75 | 2.47 | | DENSE | 400 | 2.47 | Figure 2: Extension to multiple rounds on MNIST dataset. 4.6 Supplementary Experiments Extension to Multiple Rounds. We conduct experiments on MNIST with 10 clients and data partitioning $p_k \sim \text{Dir}(\beta = 0.5)$. The results are shown in Figure 2. As DENSE could not support multiple rounds, we compare our methods with Fedavg, Fednova, SCAFFOLD, and Fedprox. FedLPA achieves the highest accuracy in the first round, denoting the strongest learning capabilities in a one-shot setting. With the increment in the number of rounds, the performances of FedLPA increase slower than the other baseline approaches. This figure shows that the joint approach (ours (one round) then Fedavg) that utilizes FedLPA in the first round and then adopts other baseline methods may be most promising to save communication and computation resources in the multiple-round federated learning scenario. Some experiments in extreme settings (the number of clients=5, $\beta = 0.001$) and an aggregation visualization can be found in Appendix C. 5 Conclusions In this work, we design a novel one-shot FL algorithm FedLPA to better model the global parameters in personalized one-shot federated learning. We propose a method that could aggregate the local clients in a layer-wise manner with their posteriors approximation via block-diagonal empirical fisher information matrices, which could effectively capture the accurate statistics of local biased dataset. Our extensive experiments show that FedLPA significantly outperforms other baselines in terms of accuracy under various settings, doing so both effectively and efficiently. Overall, FedLPA stands out as the most practical framework that conducts data-free one-shot FL, particularly well-suited for high data heterogeneity and preserving privacy without extra information leakage. REFERENCES Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing, and Afshin Rostamizadeh. Federated learning via posterior averaging: A new perspective and practical algorithms. *arXiv preprint arXiv:2010.05273*, 2020. Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In *International conference on machine learning*, pp. 173–182. PMLR, 2016. Frederik Benzing. Gradient descent on neurons and its link to approximate second-order optimization. In *International Conference on Machine Learning*, pp. 1817–1853. PMLR, 2022. Aleksandar Botev, Hippolyt Ritter, and David Barber. Practical gauss-newton optimisation for deep learning. In *International Conference on Machine Learning*, pp. 557–565. PMLR, 2017. Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, and Qi Tian. Data-free learning of student networks. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 3514–3522, 2019. Hong-You Chen and Wei-Lun Chao. Fedbe: Making bayesian model ensemble applicable to federated learning. *arXiv preprint arXiv:2009.01974*, 2020. Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. 
*Advances in Neural Information Processing Systems*, 34:20089–20103, 2021. Yiqun Diao, Qinbin Li, and Bingsheng He. Towards addressing label skews in one-shot federated learning. In *The Eleventh International Conference on Learning Representations*, 2023. Roger Grosse and James Martens. A kronecker-factored approximate fisher matrix for convolution layers. In *International Conference on Machine Learning*, pp. 573–582. PMLR, 2016. Neel Guha, Ameet Talwalkar, and Virginia Smith. One-shot federated learning. *arXiv preprint arXiv:1902.11175*, 2019. Clare Elizabeth Heinbaugh, Emilio Luz-Ricca, and Huajie Shao. Data-free one-shot federated learning under very high statistical heterogeneity. In *The Eleventh International Conference on Learning Representations*, 2022. Divyansh Jhunjhunwala, Shiqiang Wang, and Gauri Joshi. Towards a theoretical and practical understanding of one-shot federated learning with fisher information. In *Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities*, 2023. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *Foundations and Trends® in Machine Learning*, 14(1–2):1–210, 2021. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In *International Conference on Machine Learning*, pp. 5132–5143. PMLR, 2020. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the national academy of sciences*, 114(13):3521–3526, 2017. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*, 60(6):84–90, 2017.
80faVLl6ji
- “For each prompt, we generate one sample considering the annotation cost. We claim that the models should generate natural text-matching motion most of the time so that the one-sample setting would not hurt the fidelity of our user study.” I might misunderstand the statement, but I don’t think just one sample is enough to draw conclusions about the general behaviour of the method.
Bridging the Gap between Human Motion and Action Semantics via Kinematic Phrases Anonymous authors Paper under double-blind review Abstract The goal of motion understanding is to establish a reliable mapping between motion and action semantics, while it is a challenging many-to-many problem. An abstract action semantic (i.e., walk forwards) could be conveyed by perceptually diverse motions (walk with arms up or swinging), while a motion could carry different semantics w.r.t. its context and intention. This makes an elegant mapping between them difficult. Previous attempts adopted direct-mapping paradigms with limited reliability. Also, current automatic metrics fail to provide reliable assessments of the consistency between motions and action semantics. We identify the source of these problems as the significant gap between the two modalities. To alleviate this gap, we propose Kinematic Phrases (KP) that take the objective kinematic facts of human motion with proper abstraction, interpretability, and generality characteristics. Based on KP as a mediator, we can unify a motion knowledge base and build a motion understanding system. Meanwhile, KP can be automatically converted from motions and to text descriptions with no subjective bias, inspiring Kinematic Prompt Generation (KPG) as a novel automatic motion generation benchmark. In extensive experiments, our approach shows superiority over other methods. Our code and data would be made publicly available. 1 Introduction Human motion understanding has a wide range of applications, including autonomous driving (Paden et al., 2016), robotics (Koppula & Saxena, 2013), and automatic animation (Van Welbergen et al., 2010), making it increasingly attractive. The core of human motion understanding is to establish a mapping between the motion space and the action semantics space. The motion space indicates a space of sequential 3D human representations, e.g., 3D pose or SMPL (Loper et al., 2015)/SMPL-X (Pavlakos et al., 2019) parameter sequence, while the action semantic space can be represented as action categories or sentences described by natural language. Recently, a growing focus has been on generative mapping from semantics to motion, including action category-based generation (Petrovich et al., 2021) and text-based generation (Petrovich et al., 2022; Guo et al., 2022a; Lucas et al., 2022; Zhang et al., 2022; Tevet et al., 2022b; Chen et al., 2023; Zhang et al., 2023a). Most of them typically build a mapping that links motion and semantics either directly or via motion latent, with understated concerns for intermediate motion-semantic structures. However, these models suffer from inferior reliability. They cannot guarantee they generated correct samples without human filtering. Additionally, the existing evaluation of motion generation is problematic. Widely adopted FID and R-Precision rely on the latent space from a black-box pre-trained model, which might fail to out-of-distribution (OOD) and over-fitting cases. There is a long-standing need for an evaluation method that can cheaply and reliably assess whether a generated motion is consistent with particular action semantics. We identify the essence of these as the significant gap between raw human motion and action semantics, which makes direct mapping hard to learn. As in Fig. 1, an action semantics can correspond to diverse motions. 
For instance, a person could walk in countless ways with diverse motions, either with arms up or swinging, while action semantics tend to abstract these away from a walking motion. Additionally, they are robust against small perturbations, while motion is more specific and complex, with representations changing vastly when perturbed or mis-captured. Moreover, a motion sequence could have diverse semantics w.r.t. contexts. Modeling this many-to-many mapping between motion and semantics is challenging. Figure 1: The huge gap between motion and action semantics results in the many-to-many problem. We propose Kinematic Phrases (KP) as an intermediate to bridge the gap. KPs objectively capture human kinematic cues. It properly abstracts diverse motions with interpretability. As shown, the Phrases in the yellow box could capture key patterns of walk for diverse motions. To bridge this gap between motion and action semantics, we propose Kinematic Phrases (KP), an interpretable intermediate representation. KP focuses on the objective kinematic facts, which are usually omitted by general action semantics, like left-hand moving forwards then backward. KP is designed as qualitative categorical representations of these facts. For objectivity and actuality, KP captures sign changes with minimal pre-defined standards. Inspired by previous studies on kinematic human motion representation (von Laban & Lange [1975], Bartlett [1997]), KP is proposed as six types shown in Fig. 1 covering joint positions, joint pair positions and distances, limb angles and directions, and global velocity. Note that, although KP can be described by natural language, a major difference is that KP is strictly dedicated to objective kinematic facts instead of coarse actions such as surrender or fine-grained actions like raise both hands. We highlight three advantages of KP. First, KP offers proper abstraction, which disentangles motion perturbations and semantics changes, easing the learning process. Even though the motion differs significantly, KP manages to capture walk patterns easily. Second, KP is interpretable, as it can be viewed as instructions on executing the action, making it easily understandable to humans. Finally, KP is general, as it can be automatically extracted from different modalities of human motion, including skeleton and SMPL parameters. The conversion from KP to text is also effortless. With KP as an intermediate representation, we first construct a unified large-scale motion knowledge base. Then, to fully exploit KP and the knowledge base, we build a motion understanding system with KP mediation. In detail, we learn a motion-KP joint latent space in a self-supervised manner and then adopt it for multiple motion understanding applications, including motion interpolation, modification, and generation. Moreover, leveraging the interpretability of KP, we propose a benchmark called Kinematic Prompts Generation (KPG), which generates motion from text prompts converted from KPs. Thanks to the consistency and convenience of the KP-to-text conversion, KPG enables reliable and efficient motion generation evaluation. Our contributions are: (1) We propose KP as an intermediate representation to bridge the gap between motion and action semantics. (2) We build a novel motion understanding system using KP and the aggregated large-scale knowledge base. (3) We propose KPG as a benchmark for reliable and efficient motion generation evaluation. Promising results are achieved on motion interpolation and generation tasks. 
Moreover, extensive user studies are conducted, verifying the efficacy of our methods as well as the consistency between KPG evaluation and human perception.

2 RELATED WORKS

Motion Representation. An intuitive motion representation is a sequence of static pose representations, like joint locations and limb rotations. Efforts have been made to address the discontinuity of rotations for deep-learning methods (Zhou et al., 2019; Brégier, 2021). Recent works on parametric body models (Loper et al., 2015; Pavlakos et al., 2019) enable a more realistic body representation. Meanwhile, Pons-Moll et al. (2014) proposed Posebits, representing pose with boolean geometric part relationships. Delmas et al. (2022, 2023) translate Posebits into text descriptions. These abstract representations are flexible and insensitive to small perturbations, but their static nature ignores motion dynamics. Tang et al. (2022) acquire similar fine-grained descriptions from human annotation, while Xiang et al. (2022); Athanasiou et al. (2023) adopted large-scale language models. However, few recognize their potential in bridging low-level motion and high-level action semantics. Phase functions (Holden et al., 2020), Labanotations (von Laban & Lange, 1975), and learned Motion Words (Aristidou et al., 2018) were also explored, though limited to specific actions like locomotion and dancing.

Motion Generation can be conditioned on a prefix/suffix (Hernandez et al., 2019; Athanasiou et al., 2022; Guo et al., 2023), action categories (Petrovich et al., 2021; Guo et al., 2020; Xu et al., 2023), or audio (Li et al., 2021a,b). Text-based motion generation has developed rapidly with the proposal of text-motion datasets (Punnakkal et al., 2021; Guo et al., 2022a). Petrovich et al. (2022); Guo et al. (2022a); Qian et al. (2023) used VAEs, while Tevet et al. (2022a); Hong et al. (2022); Lin et al. (2023b) extended the CLIP (Radford et al., 2021) space to motion. Recently, attention has been paid to diffusion models (Zhang et al., 2022; Tevet et al., 2022b; Dabral et al., 2023; Wang et al., 2023). Azadi et al. (2023) adopted a U-Net structure. Zhang et al. (2023b); Petrovich et al. (2023) explored retrieval-based methods. Karunratanakul et al. (2023) aimed at controllable generation, while Yuan et al. (2023) introduced physical constraints. However, most approaches still suffer from the gap between motion and action semantics. Lucas et al. (2022); Guo et al. (2022b); Zhang et al. (2023a); Chen et al. (2023); Zhou & Wang (2023); Zhong et al. (2023); Kong et al. (2023) adopted a (VQ-)VAE-compressed motion representation as mediation; however, in the current data-limited situation, we identify that this single-modality compression might be sub-optimal. Instead, KP could alleviate this by introducing explicit semantic-geometric correlation.

3 KINEMATIC PHRASE BASE

3.1 KINEMATIC PHRASES

Kinematic Phrases qualitatively abstract motion into objective kinematic facts such as "left hand moves up". We take inspiration from previous kinematic motion representations (von Laban & Lange, 1975) and qualitative static pose representations (Delmas et al., 2022; Pons-Moll et al., 2014), proposing six types of KP to comprehensively represent motion at different kinematic hierarchies: For joint movements, there are 36 Position Phrases (PPs). For joint pair movements, there are 242 Pairwise Relative Position Phrases (PRPPs) and 81 Pairwise Distance Phrases (PDPs). For limb movements, there are 8 Limb Angle Phrases (LAPs) and 33 Limb Orientation Phrases (LOPs).
For whole-body movements, there are 3 Global Velocity Phrases (GVPs). KP extraction is based on a skeleton sequence $X = \{x_i \mid x_i \in \mathbb{R}^{n_k \times 3}\}_{i=1}^t$, where $n_k$ is the number of joints ($n_k = 17$ here), $x_i$ is the joint coordinates at the $i$-th frame, and $t$ is the sequence length. Note that $x_i^0$ indicates the pelvis/root joint. For each Phrase, a scalar indicator sequence is calculated from the skeleton sequence. Phrases are extracted as per-frame categorical representations w.r.t. the indicator signs. Unlike previous efforts (Pons-Moll et al., 2014; Delmas et al., 2022), we restrict the criteria of KP to indicator signs to minimize the need for human-defined standards (e.g., numerical criteria on the closeness of two joints), for objectivity and actuality. Fig. 2 illustrates the extraction procedure.

Reference Vectors are first constructed, indicating the right, upward, and forward directions from a human cognitive view. We aim at the egocentric reference frames that humans tend to use when performing actions. The negative direction of gravity is adopted as the upward vector $r_u$, the vector from the left hip to the right hip is adopted as the right vector $r_r$, and the forward vector is calculated as $r_f = r_u \times r_r$. These vectors of each frame are denoted as $R = \{r_i\}_{i=1}^t$.

Position Phrase (PP) focuses on the movement direction of joint $x^j$ w.r.t. reference vector $R$. The indicator for PP at the $i$-th frame is calculated as
$$s_{i,j} = \langle x_i^j - x_i^0, r_i \rangle - \langle x_{i-1}^j - x_{i-1}^0, r_{i-1} \rangle.$$ (1)
The sign of $s_{i,j}$ categorizes PP into moving along/against $R$, or relatively static along $R$ for indicators with small amplitudes. After filtering, 36 different PPs are extracted.

Figure 2: Six types of KP from four kinematic hierarchies are extracted from a motion sequence. A scalar indicator $s_i$ is calculated per Phrase per frame. Its sign categorizes the corresponding Phrase.

**Pairwise Relative Position Phrase (PRPP)** describes the relative position between a pair of joints $(x^j, x^k)$ w.r.t. reference vector $R$. The PRPP indicator at the $i$-th frame is $s^{(j,k,i)}_i = \langle x^j_i - x^k_i, r_i \rangle$. For $(L\text{-Hand}, R\text{-Hand})$ and the forward vector $r_f$, PRPP could be $L\text{-Hand}$ behind/in front of $R\text{-Hand}$ according to the sign of $s^{(j,k,i)}_i$. After filtering, 242 PRPPs are extracted.

**Pairwise Distance Phrase (PDP)** describes how the L2 distance between a pair of joints $(x^j, x^k)$ changes. The indicator for PDP is calculated as
$$s^{(j,k)}_i = \|x^j_i - x^k_i\|_2 - \|x^j_{i-1} - x^k_{i-1}\|_2.$$ (2)
The sign of $s^{(j,k)}_i$ categorizes PDP into moving closer/away, or relatively static. After dropping joint pairs adjacent in the skeleton topology, such as the hand and the elbow, 81 PDPs are extracted.

**Limb Angle Phrase (LAP)** targets the change of the bend angle between two connected limbs $(x^j, x^k)$ and $(x^j, x^l)$. The indicator for LAP is calculated as
$$s^{(j,k,l)}_i = \arccos\!\left(\frac{\langle x^k_i - x^j_i,\; x^l_i - x^j_i \rangle}{\|x^k_i - x^j_i\|_2 \, \|x^l_i - x^j_i\|_2}\right) - \arccos\!\left(\frac{\langle x^k_{i-1} - x^j_{i-1},\; x^l_{i-1} - x^j_{i-1} \rangle}{\|x^k_{i-1} - x^j_{i-1}\|_2 \, \|x^l_{i-1} - x^j_{i-1}\|_2}\right).$$ (3)
LAP describes the limb chain $(x^j, x^k)$–$(x^j, x^l)$ as bending or unbending. 8 LAPs are extracted.

**Limb Orientation Phrase (LOP)** describes the orientation of the limb $(x^j, x^k)$ w.r.t. $R$. Note that $x^k$ is the distal joint. The scalar indicator for LOP is calculated as $s^{(j,k,i)}_i = \langle x^k_i - x^j_i, r_i \rangle$.
The sign of $s^{(j,k,i)}_i$ categorizes the LOP into limb $(x^j, x^k)$ pointing along/against $R$, or a placeholder category for those with little magnitude. 33 LOPs are extracted. **Global Velocity Phrase (GVP)** describes the direction of global velocity with respect to $R$. The indicator is calculated as $s^i = \langle x^i_{i+1} - x^i_i, r_i \rangle$. The three categories are moving along/against $R$, or static along $R$ according to the sign of $s^i$. These result in 403 Phrases in total, covering motion diversity and distribution from various levels. While we clarify that these Phrases do not rule out the possibility of other possible useful potentials. ### 3.2 Constructing Kinematic Phrase Base KP enables us to unify motion data with different formats to construct a large-scale knowledge base containing motion, text, and KP. Motion sequences of different representations are collected, including 3D skeleton sequences and SMPL (Loper et al., 2015)/SMPL-X (Pavlakos et al., 2019) parameter sequences. The sequences are first re-sampled to 30Hz and rotated so that the negative direction of the z-axis is the gravity direction. Then, the sequences are converted into 3D skeleton sequences for KP extraction as in Sec. 3.1. Text annotations attached to the sequences are directly saved. For sequences with action category annotation, the category name is saved. For those with neither text nor action category, the text information is set from its attached additional information, like objects for SAMP (Hassan et al., 2021). Finally, we collect 87k motion sequences from 11 datasets. Detailed statistics are shown in Tab. 1. More details are included in the appendix. 4 MOTION UNDERSTANDING VIA KP By motion understanding, we mean both low-level understanding like interpolation and modification, and high-level understanding like generative mapping from text to motion. To achieve this, we first learn a motion-KP joint space with less ambiguity and more interpretability. Then, with this space, we introduce its application to both low-level and high-level motion-semantics understanding. 4.1 PRELIMINARIES We first introduce the representation for motion and KP. Motion is represented as a human pose sequence with \( n \) frames as \( M = \{m_i\}_{i=1}^{n} \). In detail, SMPL (Loper et al., 2015) pose parameters are transformed from axis-angle format to the 6D continuous representation (Zhou et al., 2019), then concatenated with the velocity of the root joint, resulting in a 147-dimensional representation per frame. KP is represented by signs of the indicators. 4.2 JOINT SPACE LEARNING Model Structure. An overview of our model is illustrated in Fig. 3. Motion VAE is a transformer-based VAE adapted from Petrovich et al. (2021). The encoder \( E_m \) takes motion \( M \) and two distribution tokens \( m_\mu, m_\sigma \) as input, and the outputs corresponding to the distribution tokens are taken as the \( \mu_m \) and \( \sigma_m \) of the Gaussian distribution. Then, the transformer decoder \( D_m \) takes \( z_m \sim G(\mu_m, \sigma_m) \) as \( K, V \), and a sinusoidal positional encoding of the expected duration as \( Q \). The output is fed into a linear layer to obtain the reconstructed motion sequence \( \hat{M} \). KP VAE with encoder \( E_p \) and decoder \( D_p \) resembles Motion VAE. The sign of \( D_p \) output is adopted as the predicted KP \( \hat{C} \). Notice that the decoders \( D_m, D_p \) could take arbitrary combinations of \( z_m, z_p \) as input, outputting \( \hat{M}, \hat{C} \). 
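Since the KP VAE ultimately consumes Phrases simply as per-frame indicator signs (Sec. 4.1), the extraction procedure of Sec. 3.1 reduces to a few array operations. The sketch below illustrates it for the reference frames and two Phrase types; the joint indices and the small static threshold are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

# assumed indices into a 17-joint skeleton; not the paper's exact joint order
PELVIS, L_HIP, R_HIP = 0, 1, 2

def reference_frames(x):
    """x: skeleton sequence [t, 17, 3]; returns per-frame right/up/forward vectors."""
    r_u = np.tile(np.array([0.0, 0.0, 1.0]), (len(x), 1))   # negative gravity direction
    r_r = x[:, R_HIP] - x[:, L_HIP]                          # left hip -> right hip
    r_r = r_r / np.linalg.norm(r_r, axis=-1, keepdims=True)
    r_f = np.cross(r_u, r_r)                                 # forward = up x right
    return r_r, r_u, r_f

def position_phrase(x, joint, r, eps=1e-3):
    """PP indicator (Eq. 1): frame-to-frame change of <x_j - x_root, r>."""
    proj = np.einsum('td,td->t', x[:, joint] - x[:, PELVIS], r)
    s = np.diff(proj)
    return np.where(np.abs(s) < eps, 0, np.sign(s)).astype(int)   # +1 / -1 / 0 (static)

def pairwise_distance_phrase(x, j, k, eps=1e-3):
    """PDP indicator (Eq. 2): frame-to-frame change of the joint-pair L2 distance."""
    d = np.linalg.norm(x[:, j] - x[:, k], axis=-1)
    s = np.diff(d)
    return np.where(np.abs(s) < eps, 0, np.sign(s)).astype(int)
```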
Self-supervised Training. With the VAEs, we propose a self-supervised training strategy to learn motion-KP joint space. As a coherent representation, the overall representation should not change drastically with a small portion of KP unknown. Even more, the missing Phrases should be recovered from existing Phrases. In this view, we randomly corrupt samples during training by setting a small portion of KP as 0. The training is thus executed in a self-supervised manner. This helps mine the correlation among different Phrases while also effectively increasing the robustness of the joint space. Similar to TEMOS (Petrovich et al., 2022), four losses are adopted: reconstruction loss, KL divergence loss, distribution alignment loss, and embedding alignment loss. 4.3 KP-MEDIATED MOTION UNDERSTANDING With the joint space, we can perform both low-level and high-level motion understanding with KP mediation. We introduce three applications to show the capability of KP, as shown in Fig. 3. | Dataset | Mot. Rep. | #Seqs | #Actions | Text | |---------|-----------|-------|----------|------| | AMASS | Mahmood et al. (2019) | SMPL-X | 26k | 260 | ✓ | | GRAB | Tänter et al. (2020) | SMPL-X | 1k | 4 | ✓ | | SAMP | Hassan et al. (2021) | SMPL-X | 0.2k | N/A | ✓* | | FitSD | Fieraru et al. (2021) | SMPL-X | 0.4k | 29 | ✓ | | CH3D | Fieraru et al. (2020) | SMPL-X | 0.4k | 8 | ✓ | | UESTC | Ji et al. (2018) | SMPL | 26k | 40 | ✓ | | AIST++ | Li et al. (2021a) | SMPL | 1k | N/A | ✓* | | BEHAVE | Bhattachagar et al. (2022) | SMPL | 0.3k | N/A | ✓* | | HuMMan | Cai et al. (2022) | SMPL | 0.3k | 339 | ✓ | | GTAHuman | Cai et al. (2021) | SMPL | 20k | N/A | x | | Motion-X | Lin et al. (2023a) | SMPL-X | 65k | N/A | ✓ | Table 1: Statistics of Kinematic Phrase Base. Mot. Rep. indicates motion representation. “✓*” means texts are generated from the attached additional information instead of human annotation. Figure 3: We train motion-KP joint latent space in a self-supervised training manner. KP is randomly masked during training. Reconstruction and alignment losses are adopted. The joint space could be applied for multiple tasks, including motion interpolation, modification, and generation. **KP-mediated Motion Interpolation** Given a corrupted motion sequence $\tilde{M}$, we extract its corresponding KP sequence $\tilde{C}$, then feed them to encoders $E_m$, $E_p$ and decoder $D_p$, resulting in the estimated KP sequence $\hat{C}$. $\tilde{C}$ and $\tilde{M}$ are fed into $E_m$, $E_p$ and $D_m$, resulting in interpolated $\hat{M}$. **Motion Modification** Motion modification functions similarly. Motion $M$ is first extracted into KP sequence $C$. Modifications could be made on $C$ resulting in $\tilde{C}$. Modified motion frames are then masked, getting $\tilde{M}$. $\tilde{M}, \tilde{C}$ are fed into $E_m$, $E_p$ and $D_m$, getting the interpolated $\hat{M}$. **KP-mediated Motion Generation**. Given text $t$, to generate a motion sequence from it, we first encode it into latent $z_t$ with CLIP text encoder $E_t$. Direct mapping could be achieved by training the motion decoder $D_m$ for $\hat{M} = D_m(z_t)$. We show that the direct mapping could be impressively improved with our joint space in Sec 6.4. With KP, we could perform a novel KP-mediated motion generation. We adopt a vanilla latent diffusion paradigm for KP-mediated text-to-motion tasks. An extra denoiser is trained to denoise a random noise $z_T^p$ to KP latent $z_p = z_0^p$ with $T$ diffusion steps. 
We then decode KP sequence $\hat{C}$ from $z_p$ with $D_p$. Then, $\hat{C}$ is encoded by $E_p$, getting distribution $G(\mu_p, \sigma_p)$. $z_p$ is sampled and sent to $D_m$ to generate a motion sequence. Experiments show that KP could be a promising stepping stone to mitigate the huge gap from action semantics to motion. 5 KINEMATIC PROMPT GENERATION With the interpretability and objectivity of KP, we propose a new motion generation benchmark. Before that, we first analyze current benchmarks. A crucial aspect of motion generation evaluation is motion-semantic consistency. The gold standard is user study. However, it is expensive and inefficient to scale. Early metrics like MPJPE (Mean Per Joint Position Error) and MAE (Mean Angle Error) mechanically calculate the error between the generated and GT samples. These metrics fail to reveal the real ability of generative models: What if the models memorize GT samples? Or what if the samples are diverse from GT but also true? FID (Frechet Inception Distance) is adopted to mitigate this issue. However, it provides a macro view of the quality of all generated samples without guarantees for individual samples. Guo et al. (2022a) proposed R-Precision, using a pre-trained text-motion matching model to examine whether the generated samples carry true semantics. They both rely on the latent space from a black-box pre-trained model, which is not credible. Besides, models might learn short paths to over-fit the pre-trained model. Moreover, since automatic mapping from motion to semantics across their huge gap is still an unsettled problem, adopting it to evaluate motion generation is not a decent choice. Moreover, most current motion generation evaluations are performed on datasets (Guo et al., 2022a; Plappert et al., 2016; Ji et al., 2018) with considerable complex everyday actions, further increasing the difficulty. To this end, we propose a novel benchmark: Kinematic Prompts Generation (KPG). Instead of previous benchmarks focusing on everyday activities or sports, we take a step back in the complexity of the target action semantics. Based on KP, KPG focuses on evaluating whether the models could generate motion sequences consistent with specific kinematic facts given text prompts. In detail, we convert KP into text prompts with templates as in Tab. 2 resulting in 840 text prompts. Given prompt $T_i \in T$ from Phrase $c_i$, the model generates motion $\hat{M}_i$, along with extracted KP $\hat{C}_i$. We calculate Accuracy as $$Acc = \frac{1}{|T|} \sum_{T_i \in T} 1[c_i \in \hat{C}_i],$$ where $1[\cdot] = 1$ if the expression in $[\cdot]$ is True, otherwise 0. Note that, for $c_i \in \hat{C}_i$, $c_i$ should keep for more than 5 consecutive frames to avoid trivial perturbations. Accuracy examines whether the Phrase corresponding to the given prompt appears in the KP sequence converted from generated motion. The calculation involves no black-box model thanks to KP, presenting a fully reliable evaluation pipeline. Also, with the effortless motion-to-KP conversion, the computation could be conducted automatically. More details are in the appendix. ### 6 EXPERIMENT #### Implementation Details. HumanML3D (Guo et al., 2022a) test split is held out for evaluation, with the rest of KPB for training. During training, the motion sequences are canonicalized by eliminating the rotation along the z-axis in the first frame, and the same counter-rotation is applied to the following frames. 
Sequences are sampled to 15 FPS and randomly clipped into short clips with lengths between 30 frames and 150 frames. The batch size is set as 288, and an AdamW optimizer with a learning rate of 1e-4 is adopted. We randomly corrupt less than 20% of the Phrases for a sample. The Motion-KP joint space is trained for 6,000 epochs. While the text-to-motion latent diffusion model is trained for 3,000 epochs, with the joint space frozen. All experiments are conducted in 4 NVIDIA RTX 3090 GPUs. More details are provided in the appendix. #### 6.1 Motion Interpolation Following Jiang et al. (2023), 50% frames are randomly masked for interpolation evaluation. FID and Diversity are also evaluated. We adopt MDM (Tevet et al., 2022b) as the baseline. In Tab. 3 our method provides better FID. While with additional KPB, the Diversity is increased. #### 6.2 Motion Generation **Settings.** We adopt the HumanML3D test set (Guo et al., 2022a) for conventional text-to-motion evaluation. The evaluation model from Guo et al. (2022a) is adopted to calculate R-Precision, FID, Diversity, and Multimodality. KPG is also adopted, with the proposed Accuracy. Also, Diversity is computed as a reference. We run the evaluation 20 times and report the average metric value. Details are given in the appendix. **Results on conventional text to motion** are shown in Tab. 3. Our method is competitive without KPB. However, KPB brings a counter-intuitive performance drop. To evaluate this, we further conduct a user study to make human volunteers judge the motions instead of a proxy neural network. Our user study is different from previous efforts in two aspects. First, instead of testing a small set of text prompts (less than 50 in previous works (Tevet et al., 2022b; Chen et al., 2023)), we randomly select 600 sentences from the HumanML3D test set. By scaling up, the result is convincing in reflecting the ability to generate motion for diverse text inputs. Second, neither asking the volunteers to give a general rating for each sample nor to choose between different samples, we ask them two questions: 1) Do the motion and the text match? and 2) Is the motion natural? For Q1, three choices are given as “No, Partially, Yes”. For Q2, two choices are given as “Yes, No”. In this way, we explicitly decouple the evaluation of text-to-motion into semantic consistency and naturalness, corresponding to R-Precision and FID. For each prompt, we generate one sample considering the | Methods | Motion Interpolation | Motion Generation | |------------------|----------------------|-------------------| | | FID↓ Diversity → R-P@1↑ | FID↓ Diversity → Multimodality | | GT | 0.002 | 9.503 | | TEMOS (Petrovich et al., 2022) | - | 0.424 | | T2M (Guo et al., 2022) | - | 0.455 | | MDM (Tevet et al., 2022b) | 2.698 | 8.42 | | TM2T (Guo et al., 2022b) | - | 0.424 | | MLD (Chen et al., 2023) | - | 0.481 | | T2M-GPT (Zhang et al., 2023a)| - | 0.492 | | MotionGPT (Zhang et al., 2023b) | 0.214 | 9.560 | | Ours* | 0.197 | 9.772 | | Ours | 0.226 | 10.022 | Table 3: Result Comparison of motion interpolation and generation on HumanML3D. R-P@1 is short for R-Precision@1. * indicates the model is trained on the HumanML3D train set only. Figure 4: User study on HumanML3D, with “Y” for Yes and “P” for partially. Figure 5: User study on KPG, with “Y” for Yes and “P” for partially. annotation cost. We claim that the models should generate natural text-matching motion most of the time so that the one-sample setting would not hurt the fidelity of our user study. 
36 volunteers are invited, each reviewing 200 sequences. Thus each sequence receives 3 user reviews. Also, we compute R-precision@1 of the generated sequences for reference. MDM (Tevet et al., 2022b), T2M-GPT (Zhang et al., 2023a), MLD (Chen et al., 2023), and our method are evaluated. User study results are shown in Fig. 4. Though our method is not superior in R-Precision, we receive better user reviews, showcasing the efficacy of our KP-mediated generation strategy. Recent T2M-GPT and MLD present similar R-Precision, but only T2M-GPT manages to keep a good performance with user reviews. Moreover, the discrepancy between R-Precision and user reviews is revealed in both absolute value and trends. More results and analysis are given in the appendix. Results on KPG are shown in Tab. 4. KPG is considered an easier task than conventional text-based motion generation since it is targeted at action semantics with much less complexity. However, previous methods are not performing as well as expected. Though we managed to deliver substantial improvements, the accuracy remains below 60%, which is far from satisfying. There is a considerable gap between existing methods and ideal motion generation models. Furthermore, given the discrepancy between automatic metrics and user study as shown in Fig. 4, we conducted a similar user study with 100 randomly selected prompts from KPG involving T2M-GPT and our model. Fig. 5 demonstrates that KP-inferred Accuracy and user reviews share similar trends. We also calculate their consistency, showing KP and user study give the same reviews for 84% of the samples. We believe KPG could thus be a first step towards reliable automatic motion generation evaluation. More analyses are given in the appendix. 6.3 VISUALIZATION We first present a modification sample in Fig. 6. By modifying KP, we could edit arbitrary motion at a fine-grained level. Also, We compare generated samples of T2M-GPT and our methods in Fig. 7. Our method properly responds to text prompts with constraints on specific body parts. This could be attributed to KP mediation, which explicitly decomposes the action semantics into kinematics cues of body parts. Note that T2M-GPT might generate redundant motion for simple prompts, while our method provides more concise and precise results. More visualizations are in the appendix. | Methods | Acc.%↑ | Diversity | |------------------|--------|-----------| | HMDM [Tevet et al., 2022b] | 44.40 | 5.725 | | MLD [Chen et al., 2023] | 44.76 | 5.901 | | T2M-GPT [Zhang et al., 2023a] | 47.86 | 6.593 | | Ours | **52.14** | **6.017** | Table 4: Results on Kinematic Prompt Generation. ![Figure 6: Our model supports fine-grained modification on motion via modification on KP.](image) ![Figure 7: Visualization of generated samples. Compared to T2M-GPT, our method provides a better response to prompts with explicit constraints on specific body parts.](image) ### 6.4 Ablation Studies Ablation study results on KPG are shown in Tab. 5. **KP mediation.** By using our joint space without KP mediation, we still present a competitive result, showing the efficacy of motion-KP joint space. **Direct mapping.** By directly mapping with no KP involved, we present a similar performance compared to previous methods. It demonstrates the significance of KP in conveying action semantics. **Different KP sets.** We examine the contribution of different KP sets: joint KP (PP), joint pair KP (PRPP, PDP), limb KP (LAP, LOP), and body KP (GVP). 
A leave-one-out style evaluation shows the elimination of joint KP and joint pair KP results in notable performance degradation, while the influence of the rest is relatively subtle. ### 7 Discussion Here, we discuss the limitations and prospects of KP and KP-based applications. **First,** KP could be extended beyond its current criteria of sign. These criteria guarantee objectivity but overlook important kinematic information like movement amplitude and speed. Also, due to the granularity of the adopted skeleton, fine-grained kinematic information on fingers is not well-preserved. The exploration of amplitude/speed/finger-based KP would be a promising goal to pursue. **Second,** KPB could be extended to datasets with other modalities, like 2D pose and egocentric action datasets. Though these modalities provide incomplete 3D information, we could extract KP that is credibly accessible across modalities. **Third,** with the convenient conversion from KP to text, auxiliary text descriptions could be automatically generated for motions via KP. **Fourth,** KPG could be extended by paraphrasing existing prompts and combining different Phrases. ### 8 Conclusion In this paper, we proposed an intermediate representation to bridge human motion and action semantics as the Kinematic Phrase. By focusing on objective kinematic facts of human motion, KP achieved proper abstraction, interpretability, and generality. A motion understanding system based on KP was proposed and proven effective in motion interpolation, modification, and generation. Moreover, a novel motion generation benchmark Kinematic Prompt Generation is proposed. We believe that KP has great potential for advancing motion understanding. REFERENCES Andreas Aristidou, Daniel Cohen-Or, Jessica K. Hodgins, Yiorgos Chrysanthou, and Ariel Shamir. Deep motifs and motion signatures. *ACM Trans. Graph.*, 37(6):187:1–187:13, November 2018. doi: 10.1145/3272127.3275038. URL http://doi.acm.org/10.1145/3272127.3275038. Nikos Athanasiou, Mathis Petrovich, Michael J Black, and Gül Varol. Teach: Temporal action composition for 3d humans. In *2022 International Conference on 3D Vision (3DV)*, pp. 414–423. IEEE, 2022. Nikos Athanasiou, Mathis Petrovich, Michael J. Black, and Gül Varol. SINC: Spatial composition of 3D human motions for simultaneous action generation. In *ICCV*, 2023. Samaneh Azadi, Akbar Shah, Thomas Hayes, Devi Parikh, and Sonal Gupta. Make-an-animation: Large-scale text-conditional 3d human motion generation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 15039–15048, October 2023. R. Bartlett. *Introduction to Sports Biomechanics*. Introduction to Sports Biomechanics. E & FN Spon, 1997. ISBN 9780419208402. URL https://books.google.com.tw/books?id=6Db8mgxsgQC. Bharat Lal Bhatnagar, Xianghui Xie, Ilya Petrov, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. Behave: Dataset and method for tracking human object interactions. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*. IEEE, jun 2022. Romain Brégier. Deep regression on manifolds: a 3d rotation case study. In *2021 International Conference on 3D Vision (3DV)*, pp. 166–174. IEEE, 2021. Zhongang Cai, Mingyuan Zhang, Jiawei Ren, Chen Wei, Daxuan Ren, Zhengyu Lin, Haiyu Zhao, Lei Yang, and Ziwei Liu. Playing for 3d human recovery. *arXiv preprint arXiv:2110.07588*, 2021. 
Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, and Ziwei Liu. Humman: Multi-modal 4d human dataset for versatile sensing and modeling. In Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner (eds.), *Computer Vision – ECCV 2022*, pp. 557–577, Cham, 2022. Springer Nature Switzerland. Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18000–18010, 2023. Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. Mo-fusion: A framework for denoising-diffusion-based motion synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 9760–9770, June 2023. Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, and Grégory Rogez. Posescript: 3d human poses from natural language. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VI*, pp. 346–362. Springer, 2022. Ginger Delmas, Philippe Weinzaepfel, Francesc Moreno-Noguer, and Grégory Rogez. Posefix: Correcting 3d human poses with natural language. *arXiv preprint arXiv:2309.08480*, 2023. Mihai Fieraru, Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Vlad Olaru, and Cristian Sminchisescu. Three-dimensional reconstruction of human interactions. In *The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2020. Mihai Fieraru, Mihai Zanfir, Silviu-Cristian Pirlea, Vlad Olaru, and Cristian Sminchisescu. Aifit: Automatic 3d human-interpretable feedback models for fitness training. In *The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2021.
xY4861TVUc
As mentioned in step 2 of Section 3.3, the proposed method does not constrain the similarity between $H(x)$ and $x$. Why not constrain the similarity, and what would happen if such a constraint were imposed? I guess that not constraining the similarity has the benefit that the model adopted in step 2 can be applied directly, without fine-tuning to satisfy a similarity constraint. Is that correct?
Towards the Vulnerability of Watermarking Artificial Intelligence Generated Content Anonymous authors Paper under double-blind review Abstract Artificial Intelligence Generated Content (AIGC) is gaining great popularity in social media, with many commercial services available. These services leverage advanced generative models, such as latent diffusion models and large language models, to generate creative content (e.g., realistic images, fluent sentences) for users. The usage of such generated content needs to be highly regulated, as the service providers need to ensure the users do not violate the usage policies (e.g., abuse for commercialization, generating and distributing unsafe content). A promising solution to achieve this goal is watermarking, which adds unique and imperceptible watermarks on the content for service verification and attribution. Numerous watermarking approaches have been proposed recently. However, in this paper, we show that an adversary can easily break these watermarking mechanisms. Specifically, we consider two possible attacks. (1) Watermark removal: the adversary can easily erase the embedded watermark from the generated content and then use it freely without the regulation of the service provider. (2) Watermark forge: the adversary can create illegal content with forged watermarks from another user, causing the service provider to make wrong attributions. We propose WMaGi, a unified framework to achieve both attacks in a holistic way. The key idea is to leverage a pre-trained diffusion model for content processing, and a generative adversarial network for watermark removing or forging. We evaluate WMaGi on different datasets and embedding setups. The results prove that it can achieve high success rates while maintaining the quality of the generated content. Compared with existing diffusion model-based attacks, WMaGi is $5,050 \sim 11,000 \times$ faster. 1 Introduction Benefiting from the advance of generative deep learning models (Rombach et al., 2022; Touvron et al., 2023), Artificial Intelligence Generated Content (AIGC) has become increasingly famous. Many commercial services have been released, which leverage large models (e.g., ChatGPT (cha), Midjourney (Mid)) to generate creative content based on users’ demands. The rise of AIGC also leads to some legal considerations, and the service provider needs to set up some policies to regulate the usage of generated content. First, the generated content is one important intellectual property of the service provider, many services do not allow users to make the AIGC into commercial use (Touvron et al., 2023; Mid). Selling the generated content for financial profit will violate this policy and cause legal issues. Second, generative models have the potential of outputting unsafe content (Wei et al., 2023; Qi et al., 2023; Liu et al., 2023a; Le et al., 2023), such as fake news (Guo et al., 2021), malicious AI-powered images (Salman et al., 2023; Le et al., 2023), phishing campaigns (Hazell, 2023), and cyberattack payloads (Charan et al., 2023). New laws are established to regulate the generation and distribution of content from deep learning models on the Internet. As protecting and regulating AIGC become urgent, Google hosted a workshop in June 2023 to discuss the possible solutions against malicious usage of generative models (Barrett et al., 2023). Not surprisingly, the watermarking technology as mentioned as a promising defense. 
¹ https://okuha.com/best-sites-to-sell-ai-art/
² https://www.reuters.com/technology/governments-efforts-regulate-ai-tools-2023-04-12/
³ https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework
⁴ https://www.lexology.com/library/detail.aspx?g=42ad7be8-76bd-40c8-ae5d-271aaf3710eb

By adding specific, invisible watermarks to the generated content (Fernandez et al., 2023; Kirchenbauer et al., 2023; Liu et al., 2023b), we are able to identify the misuse of AIGC and trace it back to the corresponding users. A variety of robust watermarking methodologies have been designed, which can be classified into two categories. (1) A general method is to make the generative model learn a specific data distribution, which can be decoded by another deep learning model to obtain a secret message as the watermark (Fernandez et al., 2023; Liu et al., 2023b; Zhao et al., 2023b). (2) The model owner can concatenate a watermark embedding model (Zhu et al., 2018; Tancik et al., 2020) after the generative model to make the final output contain watermarks. A very recent work from DeepMind, SynthID Beta (Syn), detects AI-generated images by adding watermarks to generated images\(^5\). According to its description, this service possibly follows a similar strategy as StegaStamp (Tancik et al., 2020), which adopts an encoder to embed watermarks into images and a decoder to identify the embedded watermarks in the given images. The Google workshop (Barrett et al., 2023) reached the consensus that “existing watermarking algorithms only withstand attacks when the adversary has no access to the detection algorithm”, and that embedding a watermark into a clean image or text “seems harder for the attacker, especially if the watermarking process involves a secret key”. However, in this paper, we argue that this is not the case. We find that it is easy for an adversary without any prior knowledge to remove or forge the embedded secret watermark in AIGC, which breaks both IP protection and content regulation. Specifically, (1) a watermark removal attack makes the service provider fail to detect the watermarks it previously embedded into the AIGC, so the malicious user can circumvent the policy regulation and abuse the content for any purpose. (2) A successful watermark forge attack can intentionally embed a watermark of another user into unsafe content without knowledge of the secret key. This could lead to wrong attributions and frame up that benign user. We introduce WMaGi, a novel framework to achieve both watermark removal and forge attacks against AIGC in a unified manner. The key idea is to leverage a pre-trained diffusion model and train a generative adversarial network (GAN) for erasing or embedding watermarks in AIGC. Figure 1 shows the overview of WMaGi, which consists of three steps: data collection, data pre-processing, and model training. In the first step, the adversary collects AIGC from the target service or a specific user. We assume that the adversary can only collect watermarked AIGC, without any clean content. Furthermore, we assume that the same secret watermark message is applied to all the collected data. More details of our threat model can be found in Section 3.2. In the second step, we introduce an adversarial pipeline to weaken the embedded watermark messages in the data. Specifically, the adversary adopts a public diffusion model, such as DDPM (Ho et al., 2020), to denoise the collected data.
The diffusion model can be either non-watermarked, or watermark-protected with a different secret. Its preprocessing operation can make the embedded message unrecoverable from the denoised data. In the third step, the adversary trains a GAN model to map the data distribution from collected data to denoised data (for watermark removal) or from denoised data to collected data (for watermark forging). After the model is trained, the adversary can adopt the generator to remove or forge the specific watermark for AIGC, depending on the target in the third step.

\(^5\)Up to the date of writing, SynthID Beta is still a beta product only provided to a small group of users. Unfortunately, we do not have access to it. Therefore, we cannot provide evaluation results with respect to it in our experiments.

We evaluate our proposed WMaGi on various datasets (e.g., CIFAR-10, CelebA) and settings (e.g., different watermark lengths, few-shot learning) to show its generalizability. Our results prove that the adversary can successfully remove or forge a specific watermark in the AIGC and keep the content indistinguishable from the original one. This provides concrete evidence that existing watermarking schemes are not reliable, and the community needs to explore more robust watermarking methods. Overall, our contribution can be summarized as follows:

• To the best of our knowledge, this is the first work focusing on removing and forging watermarks in AIGC under a black-box threat model. Furthermore, WMaGi is a unified framework, which can achieve both attack goals in a holistic way. Our study discloses the unreliability and fragility of existing watermarking schemes.
• Different from prior attacks, WMaGi does not require the adversary to have clean data or any information about the watermarking schemes, which is more practical in real-world applications.
• Comprehensive evaluation proves that WMaGi can remove or forge the watermarking information without harming the data quality. WMaGi is time-efficient, being $5,050 \sim 11,000 \times$ faster than existing attacks with diffusion models.
• We prove that WMaGi is effective in the few-shot setting, i.e., it can be freely adapted to unseen watermarks. Furthermore, WMaGi remains highly effective for different watermark lengths.

2 RELATED WORKS

2.1 CONTENT WATERMARK

Driven by the rapid development of large and multi-modal models, there is a renewed interest in generative models, such as ChatGPT (cha) and Stable Diffusion (Rombach et al., 2022), due to their capability of creating high-quality images (Ho et al., 2020; Rombach et al., 2022), texts (cha; Touvron et al., 2023), audios (Kong et al., 2021), and videos (Ho et al., 2022). The generated content is referred to as Artificial Intelligence Generated Content (AIGC). Such AIGC can have high IP value and sensitive content. Therefore, it is important to protect and regulate it during its distribution on public platforms, e.g., Twitter (Twi) and Instagram (Ins). A typical strategy to achieve the above goal is watermarking: the service provider adds a secret and unique message to the content, which can be subsequently extracted for ownership verification and attribution. Existing watermarking schemes can be divided into post hoc methods and prior methods. Post hoc methods convert the clean content into watermarked content following one of two strategies.
(i) Visible watermark strategy: the service provider adds characters or paintings into the clean content (Liu et al., 2021; Cheng et al., 2018), which can be recognized by humans. (ii) Invisible watermark strategy: the service provider embeds a specific bit string into the clean content by a pre-trained steganography model (Zhu et al., 2018; Tancik et al., 2020) or signal transformation (Nam et al., 2021), which will be decoded by a verification algorithm later. For prior methods, the generative model directly learns a distribution of watermarked content, which can be decoded by a verification algorithm (Fei et al., 2022; Fernandez et al., 2023; Cui et al., 2023; Zhao et al., 2023b). Specifically, Fei et al. (2022) designed a watermarking scheme for generative adversarial networks (GANs), by learning the distribution of watermarked images supervised by the watermark decoder. Fernandez et al. (2023); Zhao et al. (2023b) designed a watermarking scheme for diffusion models (Rombach et al., 2022), which embeds a predefined bit string into the generated images. The bit string can be restored with a secret decoder. Therefore, the service provider can recognize the AIGC from his generative model or determine the specific user account. In this paper, we target both post hoc methods and prior methods. For post hoc methods, we do not consider visible watermarks as they can significantly decrease the visual quality of AIGC, making them less popular for practical adoption. For invisible watermarks, we only consider the steganography approach, as it is much more robust and harder to attack than the signal transformation approach (Nam et al., 2021; Wang et al., 2022; Zhao et al., 2023a). 2.2 Attacks Against Watermarks To the best of our knowledge, there is no work considering the watermark forge attack. Prior efforts mainly focus on the watermark removal attack. These attack solutions can be summarized into three main categories, i.e., image inpainting methods (Ulyanov et al., 2018; Liang et al., 2021) for visible watermarks, denoising methods (Li, 2023; Zhao et al., 2023a), and disrupting methods (Nam et al., 2021; Wang et al., 2022) for invisible watermarks. However, they have several critical drawbacks in practice. Specifically, the image inpainting methods (Ulyanov et al., 2018; Liang et al., 2021) require clean images and watermarked images to train the inpainting model, which is not feasible in the real world, because the user can only obtain watermarked images from the service providers (Mid). Disrupting methods (Nam et al., 2021; Wang et al., 2022) require the user to know the details of the watermarking schemes, which is also difficult to achieve. The most promising method is based on denoising models. For instance, Li (2023) adopted guided diffusion models to purify the watermarked images and minimize the differences between the watermarked images and diffusion model’s outputs. However, using diffusion models to remove the watermark will cost a lot of time. In sum, compared to prior works, (1) we are the first to consider the watermark forge attack; (2) we build a unified framework WMaGi to realize these two types of attacks in a holistic way; (3) our watermark removal attack from WMaGi is more practical as it does not require the clean content or watermarking schemes. It is also more efficient as it brings $5,050 \sim 11,000 \times$ speedup compared to diffusion model-based attacks. 
3 WMaGi: A Unified Attack Framework In this section, we first give a formal definition of the watermark verification process. Then, we introduce our threat model in the context of adversary’s power and knowledge. Finally, we introduce details of our proposed WMaGi. To the best of our knowledge, it is the first black-box watermark removal and forging method in practice. We mainly consider watermarks embedded in the generated images. Watermarks in other domains, such as language and audio, will be our future work. 3.1 Preliminary We consider a general watermarking scheme, which is widely studied in previous works (Fernandez et al., 2023; Liu et al., 2023b) and can be used to protect and regulate the AIGC. In this scheme, the service provider adopts a decoder $M_D$ to recover the embedded secret message $m$, i.e., the watermark, from the image $x$ generated by its model $M_G$. For a successful watermark verification, we give the following definition: **Definition 1** Given a threshold $\tau$, a message decoder $M_D$, and a secret message $m$, if an image $x^s$ fulfills $\text{Dis}(M_D(x^s), m) \leq f(\tau, m)$, where $\text{Dis}(\cdot, \cdot)$ is a distance metric and $f(\cdot, \cdot)$ is a function of $\tau$ and $m$, we say that $x^s$ passes the verification with respect to the secret message $m$. Otherwise, verification fails. Definition 1 is general for all types of secret message $m$. Specifically, the most popular type used in many watermarking implementations is a bit string (Fei et al., 2022; Fernandez et al., 2023; Cui et al., 2023; Zhao et al., 2023b). When $m$ is a bit string, the distance $\text{Dis}(\cdot, \cdot)$ is the Hamming Distance, and $f(\tau, m) = \tau \times |m|$, where $|m|$ represents the length of the message $m$. In this paper, we mainly consider this type of watermark message, which is the most reliable and mature method. 3.2 Threat Model **Attack Goals.** We consider a practical scenario where a service provider employs a large generative model $M_G$ to generate creative images for public users. The service provider embeds a secret user-specific watermark $m$ to each generated image. By extracting the watermark $m$ from any misused image on the Internet, the service provider is able to detect the policy violation events and attribute to the corresponding users. A malicious user can break this watermarking scheme with two distinct goals. (1) **Watermark removal attack**: the adversary receives a generated image from the service provider, which contains the secret watermark associated with him. He aims to erase the watermark from the generated image, and then use it freely without the consideration of following the service policy, as the provider is not able to identify the watermarks and track to him any more. (2) **Watermark forge attack**. The adversary tries to frame up a victim user by forging the victim’s watermark on a malicious image (from another model or created by humans). Then the adversary can distribute the image on the Internet. The service provider will attribute to the wrong user. **Adversary’s capability.** We consider the black-box scenario, where the adversary can only obtain the generated image \( x \) and has no knowledge of the employed generated model or watermark scheme. This is practical, as many service providers only release APIs for users to use their models without leaking any information about the details of the backend models \( M_G \) and \( M_D \). 
We further assume that all the generated images from the target service are watermark-protected, so the adversary cannot collect any clean images. These assumptions increase the attack difficulty compared to prior works. ### 3.3 Methodology Detail We introduce WMaGi to manipulate watermarks with the above goals. To overcome the black-box challenges, the adversary can adopt a pre-trained denoise model, which accepts noisy images as its inputs and returns new images containing less noise. Then he can train a GAN model to remove or forge watermarks. The details of our WMaGi, comprised of three steps, are described as follows. **Step 1: Data Collection.** The adversary collects the images \( x_i \) generated by the target service provider for the target user from the Internet. Such data collection is feasible, as people who share their created content on social media are normally using a specific account and adding tags to indicate the used service. Alternatively, the adversary can also query the service to collect the watermarked images with his account. All the collected data contain one specific watermark \( m \). This establishes a dataset \( X = \{x_i | x_i \sim (M_G, m)\} \), where \( M_G \) is the service provider’s generative model. **Step 2: Data Pre-processing.** Then, the adversary needs to modify the collected data to weaken the embedded watermarks. He can adopt a pre-trained denoise model \( H \), which can be downloaded from a public source, like Hugging Face. The adversary creates a new dataset \( \hat{X} = \{\hat{x}_i | H(x_i) = \hat{x}_i, x_i \in X\} \), and uses \( \hat{x}_i \) to approximate the clean image \( x_i \). Note that it is possible that \( H(x) \) will be visually different from \( x \). Our method does not implicitly constrain the similarity between \( H(x) \) and \( x \), making it more general. **Step 3: Model Training.** The adversary needs to build a connection between clean images and watermarked images, so that he can remove or forge a specific watermark. To find such a map, he can adopt a generative adversarial network (GAN) with \( X \) and \( \hat{X} \). There are two components in training the GAN model, i.e., generator \( G \) and discriminator \( D \). Specifically, for watermark removal, \( G \) is to modify \( x_i \), and \( D \) is to judge the distribution similarity between \( G(x_i) \) and \( \hat{x}_i \). To better study the distribution of the watermarked and clean data distributions, we adopt the Wasserstein distance (Arjovsky et al., 2017) to optimize both \( G \) and \( D \). The loss functions can be written as: \[ L_D = -E_{\hat{x} \in \hat{X}}[D(\hat{x})] + E_{x \in X}[D(G(x))] + w_D E_{\hat{x}, x \in X}[\nabla_{\alpha x + (1-\alpha)\hat{x}} D(\alpha x + (1-\alpha)\hat{x})], \] \[ L_{G_D} = -w_G E_{x \in X}[D(G(x))]. \] where \( w_D \) and \( w_G \) are weights for losses and \( \alpha \) is a random variable between 0 and 1.\(^6\) On the other hand, to guarantee the quality of generated images, we adopt several loss functions to restrict the image quality, which can be written as: \[ L_x = E_{x \in X}[L_1(G(x), x) + MSE(G(x), x) + LPIPS(G(x), x)]. \] where \( L_1 \) is the \( L_1 \)-norm, MSE is the mean squared error loss, and LPIPS is the perceptual loss (Zhang et al., 2018). For watermark forge, \( G \) is to modify \( \hat{x}_i \), and \( D \) is to judge the distribution similarity between \( G(\hat{x}_i) \) and \( x_i \). 
Therefore, the loss function can be written as: \[ L_D = -E_{x \in X}[D(x)] + E_{\hat{x} \in \hat{X}}[D(G(\hat{x}))] + w_D E_{\hat{x}, x \in X}[\nabla_{\alpha x + (1-\alpha)\hat{x}} D(\alpha x + (1-\alpha)\hat{x})], \] \[ L_{G_D} = -w_G E_{\hat{x} \in \hat{X}}[D(G(\hat{x}))], \] \[ L_x = E_{\hat{x} \in \hat{X}}[L_1(G(\hat{x}), \hat{x}) + MSE(G(\hat{x}), \hat{x}) + LPIPS(G(\hat{x}), \hat{x})]. \] \(^6\)We slightly modify the discriminator loss for large-resolution images to stabilize the training process. The details can be found in Appendix A. Table 1: Performance of WMaGi under different bit lengths. The number of images for the adversary is 25,000. | Bit Length | Original | Watermark Remove | Watermark Forge | |------------|----------|------------------|----------------| | | Bit Acc | FID | PSNR | SSIM | CLIP | Bit Acc | FID | PSNR | SSIM | CLIP | Bit Acc | FID | PSNR | SSIM | CLIP | | 4 bit | 100.00% | 4.2 | 27.81| 0.89 | 0.91 | 53.63% | 16.6 | 24.51| 0.86 | 0.92 | 95.70% | 17.3 | 26.70| 0.88 | 0.93 | | 8 bit | 100.00% | 10.19| 22.71| 0.73 | 0.99 | 77.68% | 20.42| 24.42| 0.83 | 0.91 | 92.49% | 21.09| 25.49| 0.85 | 0.93 | | 16 bit | 100.00% | 11.34| 22.71| 0.73 | 0.98 | 20.10% | 24.63| 23.44| 0.77 | 0.91 | 92.23% | 18.34| 25.84| 0.83 | 0.94 | | 32 bit | 99.99% | 28.76| 19.99| 0.53 | 0.96 | 53.64% | 25.33| 21.17| 0.64 | 0.91 | 90.14% | 31.13| 23.41| 0.71 | 0.93 | Table 2: Performance of WMaGi under the different number of collected images. The length of embedded bits is 8. The overall training loss for $G$ can be written as $$L_G = L_{GD} + w_x L_x,$$ where $w_x$ is a weight for the loss function. ## 4 EVALUATIONS ### 4.1 EXPERIMENT SETUP **Datasets.** We mainly consider two datasets: CIFAR-10 and CelebA (Liu et al., 2015). CIFAR-10 contains 50,000 training images and 10,000 test images with a resolution of 32*32. CelebA is a celebrity faces dataset, which contains 162,770 images for training and 19,867 for testing, resized at a resolution of 64*64 in our experiments. We randomly split the CIFAR-10 training set into two disjoint parts, one of which is to train the service provider’s model and another is used by the adversary. Similarly, we randomly pick 100,000 images for the service provider and 10,000 images for the adversary from the training set of CelebA. **Watermarking Schemes.** Considering the watermark’s expandability to multiple users, we mainly adopt the post hoc manner, i.e., adding user-specific watermarks to the generated images. We adopt StegaStamp (Tancik et al., 2020), a state-of-the-art and robust method for embedding bit strings into given images, which is proved to be the most effective watermarking embedding method against various removal attacks (Zhao et al., 2023a). We also provide two case studies to explore the prior manner, which directly generates images with watermarks For our case studies, We follow previous works (Fei et al., 2022; Zhao et al., 2023b) to embed a secret watermark to WGAN-div (Wu et al., 2018) and EDM (Karras et al., 2022), respectively. **Baselines.** To the best of our knowledge, WMaGi is the first work to remove or forge a watermark in images under a pure black-box threat model. Therefore, we consider some potential baseline attack methods under the same assumptions and attacker’s capability, i.e., having only watermarked images. These baseline methods can be classified into two groups. (1) Image transformation methods: we consider modifying the properties of the given image, such as resolution, brightness, and contrast. 
We also consider image compression (e.g., JPEG) and image disruptions (e.g., Gaussian blurring, adding Gaussian noise). (2) Diffusion model methods (Li, 2023): we directly adopt a pre-trained unconditional diffusion model (DiffPure (Nie et al., 2022)) to modify the given image, which does not require to train a diffusion model from scratch and does not need clean images. As the diffusion model is not trained or fine-tuned for watermark removal or forge, for consistency, we do not adopt guided diffusion models or conditional diffusion models as Li (2023) did. The results from pre-trained diffusion models are various on different datasets, which will be discussed in Appendix C. Specifically, for watermark removal, the watermarked images are inputs for the attacks; for watermark forge, the clean images are inputs for the attacks. **Implementation.** We adopt DiffPure (Nie et al., 2022) as the diffusion model used in the second step of WMaGi without any fine-tuning. As the adversary does not have any knowledge of the watermarking scheme, it is important to decide which checkpoint should be used in the attack. We provide a simple way to help the adversary select a checkpoint during the training process in Appendix B. More details about our implementations can be found in Appendix A, including all hyperparameters and used bit strings. Table 3: Results of different attacks. The bit string length is 32 bits. | Methods | Bit Acc | FID | PSNR | SSIM | CLIP | |------------------|---------|-----|------|------|------| | CenterCrop | 99.92% | 53.80| 24.97| 0.71 | 0.86 | | GaussianNoise | 100.00% | 25.09| 26.26| 0.84 | 0.86 | | GaussianBlur | 99.27% | 17.42| 28.40| 0.89 | 0.89 | | JPEG | 100.00% | 4.26 | 19.70| 0.87 | 0.95 | | Brightness | 100.00% | 4.26 | 19.70| 0.87 | 0.95 | | Gamma | 99.99% | 4.93 | 23.84| 0.93 | 0.94 | | Hue | 100.00% | 4.26 | 24.28| 0.85 | 0.95 | | Contrast | 67.82% | 7.30 | 20.61| 0.62 | 0.69 | | DMI₁ | 47.20% | 8.28 | 15.76| 0.34 | 0.67 | | DMI₂ | 51.48% | 9.93 | 20.61| 0.91 | 0.90 | Table 4: Results of different attacks. The bit string length is 48 bits. | Methods | Bit Acc | FID | PSNR | SSIM | CLIP | |------------------|---------|-----|------|------|------| | CenterCrop | 60.53% | 62.36| 23.39| 0.68 | 0.83 | | GaussianNoise | 100.00% | 35.25| 24.84| 0.81 | 0.85 | | GaussianBlur | 99.30% | 28.94| 25.87| 0.85 | 0.85 | | JPEG | 100.00% | 13.37| 18.96| 0.83 | 0.91 | | Brightness | 100.00% | 13.37| 18.96| 0.83 | 0.91 | | Gamma | 99.87% | 15.82| 24.79| 0.88 | 0.90 | | Hue | 100.00% | 13.66| 22.84| 0.82 | 0.91 | | Contrast | 71.54% | 7.86 | 20.21| 0.60 | 0.69 | | DMI₁ | 53.75% | 8.29 | 15.67| 0.33 | 0.67 | | DMI₂ | 54.56% | 10.08| 28.29| 0.88 | 0.88 | Metrics. To fairly evaluate our proposed WMaGi, we consider five metrics to measure its performance from different perspectives. To determine the quality of the watermark removal (forge) task, we adopt Bit Acc, which can be calculated as Bit Acc(m, m') = \frac{|m - H(m, m')|}{|m|} \times 100\%, where \(H(\cdot, \cdot)\) is the Hamming Distance. If Bit Acc(m, m') ≥ τ, which is defined in Definition 1, verification will pass. Otherwise, it will fail. To evaluate the quality of the images generated by WMaGi and the baselines, we adopt the Fréchet Inception Distance (FID) (Heusel et al., 2017), the peak signal-to-noise ratio (PSNR) (Horé & Ziou, 2010), and the structural similarity index (SSIM) (Horé & Ziou, 2010). Furthermore, we consider the semantic information inside the images, which is evaluated by CLIP (Radford et al., 2021). 
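As a small reference for the bit-string metric above (reading the numerator as $|m| - H(m, m')$), the following sketch is our illustration rather than the paper's code; the 90% threshold is an arbitrary example value for $\tau$.

```python
import numpy as np

def hamming_distance(m: np.ndarray, m_prime: np.ndarray) -> int:
    """H(m, m'): number of bit positions where the two strings differ."""
    return int(np.sum(m != m_prime))

def bit_accuracy(m: np.ndarray, m_prime: np.ndarray) -> float:
    """Bit Acc(m, m') = (|m| - H(m, m')) / |m| * 100 (in percent)."""
    return (m.size - hamming_distance(m, m_prime)) / m.size * 100.0

def passes_verification(decoded: np.ndarray, secret: np.ndarray, tau: float = 90.0) -> bool:
    """Threshold rule from Section 4.1: verification passes iff Bit Acc >= tau.
    With Dis = Hamming distance this is the bit-string instance of Definition 1."""
    return bit_accuracy(secret, decoded) >= tau

# Example: a 32-bit secret and a decoded message with 3 flipped bits.
secret = np.random.randint(0, 2, size=32)
decoded = secret.copy()
decoded[:3] ^= 1
print(bit_accuracy(secret, decoded))                 # 90.625
print(passes_verification(decoded, secret, tau=90.0))  # True
```

This is the quantity an attacker tries to push below the threshold for removal, or above it for forging.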
For the FID, PSNR, SSIM, and CLIP scores, we compute the results between clean images and watermarked images for the watermarking scheme, and between clean images and images after removal or forge attacks. 4.2 Ablation Study In this section, we explore the generalizability of our proposed WMaGi under the views of the length of the embedding bits and the number of collected images. In Table 1, we show the results of WMaGi at different lengths of embedded bits. The results indicate that WMaGi is robust for different secret message lengths. Specifically, when the length of the embedded bits increases, WMaGi can still achieve good performance on watermark removing or forging and make the transferred images keep high quality and maintain semantic information. In Table 2, we present the results when the adversary uses the different numbers of collected images as its training data. The results indicate that even with limited data, the adversary can remove or forge a specific watermark without harming the image quality, which proves that our method can be a real-world threat. Therefore, our proposed WMaGi has outstanding flexibility and generalizability under a practical threat model. We further prove its extraordinary few-shot generalizability for unseen watermarks in Section 4.3. 4.3 Main Results on Post Hoc Manners In this section, we focus on post hoc manners, i.e., adding watermarks to AIGC with an embedding model. Because the post hoc watermarking scheme can freely change the embedding watermarks, we evaluate WMaGi under few-shot learning to show the capability of adapting to unseen watermarks. Results on CelebA. We consider two different lengths of the embedding bits, i.e., 32-bit and 48-bit. Furthermore, we do not consider the specific coding scheme, including the source coding and the channel coding. In Tables 3 and 4, we compare the results of WMaGi and the baseline methods on the watermark removal task and the watermark forging task, respectively. We notice that the watermark embedding method is robust against various image transformations. Using image transformations cannot simply remove or forge a specific watermark in the given images. For methods using diffusion models, we consider two settings, i.e., adding large noise to the input (DMI₁) and adding small noise... | # of Samples (bit length = 32bit) | Original | Watermark Remove | Watermark Forge | |----------------------------------|----------|------------------|----------------| | | Bit Acc | FID | PSNR | SSIM | CLIP | Bit Acc | FID | PSNR | SSIM | CLIP | Bit Acc | FID | PSNR | SSIM | CLIP | | 10 | 97.98% | 4.10 | 25.00 | 0.90 | 0.96 | 97.98% | 4.10 | 25.00 | 0.90 | 0.96 | | 50 | 100.00% | 4.14 | 30.69 | 0.94 | 0.96 | 53.21% | 19.74 | 24.47 | 0.87 | 0.86 | 85.18% | 12.89 | 28.37 | 0.94 | 0.93 | | 100 | 53.27% | 14.30 | 25.51 | 0.89 | 0.87 | 93.47% | 12.43 | 26.57 | 0.92 | 0.91 | Table 5: Few-shot generalization ability of WMaGi on unseen watermarks. Figure 2: The first column is clean images. The second is watermarked images. The third is the output of DM_I. The fourth is the output of DM_S. The fifth is the output of WMaGi. The sixth is the difference between the first and second columns. The seventh is the difference between the first and third columns. The eighth is the difference between the first and fourth columns. The ninth is the difference between the first and fifth columns. The tenth is the difference between the second and fifth columns. to the input (DM_S). 
Especially, we use the same setting as DM_I in the second step of WMaGi to generate images. Although diffusion models can easily remove the watermark from the given images under both settings, the generated images are visually different from the input images, causing a low PSNR, SSIM, and CLIP score. Furthermore, the FID indicates that the diffusion model will cause a distribution shift compared to the clean dataset. Nevertheless, we find that DM_I and DM_S can maintain high image quality while successfully removing watermarks on other datasets, which we discuss in Appendix C. The results make us reflect on the generalizability of diffusion models on different datasets and watermarking schemes. However, evaluating all accessible diffusion models on various datasets and watermarking schemes will take months. Therefore, we leave it as future work to deeply study the diffusion models in the watermarking removal task. On the other hand, forging a specific unknown watermark is non-trivial and impossible for both image transformation methods and diffusion models. Our WMaGi gives an outstanding performance in both tasks and maintains good image quality as well. However, we notice that as the length of the embedded bit string increases, it becomes more challenging to forge or remove the watermark. That is the reason that under 48-bit length, our WMaGi has a little performance drop on both tasks with respect to bit accuracy and image quality. We provide visualization results in the following content to prove images generated by WMaGi are still visually close to the given image under a longer embedding length. More importantly, WMaGi is very time-efficient compared to diffusion model methods. We present the results in Appendix D. **Few-Shot Generalization.** In real-world applications, large companies can assign a unique watermark for every account or change watermarks periodically. Therefore, it is important to study the few-shot power of WMaGi, i.e., fine-tuning WMaGi with several new data with an unseen watermark to achieve outstanding watermark removal or forging abilities for the unseen watermark. In our experiments, we mainly consider embedding a 32-bit string into clean images. Then, we fine-tune the model in Table 3 to fit new unseen watermarks. In Table 5, we present the results under 10, 50, and 100 training data for watermark removal and forging. The results indicate that the watermark removal task is much easier than the watermark forging task. Furthermore, with more accessible data, both bit accuracy and image quality can be improved. It is worth noticing that, even with limited data, WMaGi can successfully remove or forge an unseen watermark and maintain high image quality. The results prove that our proposed method has strong few-shot generalization power to meet practical usage. **Visualization.** To better compare the image quality of WMaGi with other baselines, we show the visualization results in Figure 2. Specifically, both DM_S and DM_I will change the semantic information in inputs. WMaGi can keep the image details in the watermark removal and forging tasks. Furthermore, when comparing the differences between clean and watermarked images, we find that WMaGi can produce a similar residual as the watermark embedding model, which means that WMaGi can learn the embedding information during the training process. More results can be found in Appendix E. Table 6: Results of removing and forging content watermarks from the WGAN-div and EDM. 
| Methods | WGAN-div | | | EDM | | | |--------------|----------|----------|----------|-----|----------|----------| | | Bit Acc | FID | Bit Acc | FID | Bit Acc | FID | | | Original | Watermark Remove | Watermark Forge | Original | Watermark Remove | Watermark Forge | | CenterCrop | 99.99% | 100.65 | 48.72% | 41.58 | 99.99% | 100.65 | | GaussianNoise| 99.99% | 56.83 | 52.04% | 22.85 | 99.99% | 56.83 | | GaussianBlur | 98.43% | 64.09 | 52.30% | 15.12 | 98.43% | 64.09 | | JPEG | 99.66% | 60.20 | 52.25% | 0.47 | 99.66% | 60.20 | | Brightness | 99.66% | 60.59 | 52.25% | 0.63 | 99.66% | 60.59 | | Gamma | 99.55% | 63.70 | 52.17% | 1.83 | 99.55% | 63.70 | | Hue | 97.16% | 117.80 | 46.20% | 83.36 | 97.16% | 117.80 | | Contrast | 97.12% | 100.93 | 49.17% | 68.79 | 97.12% | 100.93 | | DDM | 97.12% | 100.93 | 49.17% | 68.79 | 97.12% | 100.93 | | WMaGi | 97.12% | 69.88 | 95.72% | 5.84 | 97.12% | 69.88 | 4.4 Main Results on Prior Manners In this section, we focus on prior methods, i.e., directly embedding watermarks into the generative models. We follow the previous methods (Fei et al., 2022) and (Zhao et al., 2023b) to embed a secret bit string into a WGAN-div and an EDM as a watermark, respectively. Therefore, all generated images contain a pre-defined watermark, but we cannot have the corresponding clean images. That is to say, we cannot obtain the PSNR, SSIM, and CLIP scores as previously. So, we only evaluate the FID and the bit accuracy in our experiments. Specifically, we train the WGAN-div with 100,000 watermarked images randomly selected from the training set of CelebA. We directly use the models provided by Zhao et al. (2023b), which are trained on FFHQ embedded with a 64-bit string. For WMaGi, we use the WGAN-div and EDM to generate 10,000 samples as the accessible data. In Table 6, we show the results of different attacks to remove or forge the watermark. First, we find that embedding a watermark into the generative model will cause the generated images to have a different distribution from the clean images, making the FID extremely high. Second, the EDM can generate high-quality images even under watermarking, causing a lower FID. However, we find that the embedded watermark by Zhao et al. (2023b) is less robust, which can be removed by blurring and JPEG compression. It could be because they made some trade-off between the image quality and robustness. For both, WMaGi can successfully remove and forge the specific watermark in the generated images and keep the same image quality as the generative model. The visualization results can be found in Appendix E. 4.5 Potential Defenses for Service Providers Although WMaGi is an effective method for removing or forging a specific watermark in images, there are some possible defense methods against our attack. First, large companies can assign a group of watermarks to an account to identify the identity. When adding watermarks to the images, the watermark can be randomly selected from the group of watermarks, which can hinder the adversary from obtaining images containing the same watermark. However, such a method requires a longer length of embedded watermarks to meet the population of users, which will decrease the image quality because embedding a longer watermark will damage the image. Another defense is to design a more robust watermarking scheme, which can defend against removal attacks from diffusion models. Because WMaGi requires diffusion models to remove the watermarks. 
The aforementioned two methods have the potential to defend against WMaGi but have different shortcomings, such as decreasing the image quality, requiring a newly designed coding scheme, and requiring a newly designed robust watermarking scheme. Therefore, WMaGi will be a threat for future years. 5 Limitations and Conclusions In this paper, we consider a practical threat to AIGC protection and regulation schemes, which are based on the state-of-the-art watermarking technology. We introduce WMaGi, a unified attack framework to effectively remove or forge watermarks over AIGC while maintaining good image quality. With WMaGi, the adversary only requires watermarked images without their corresponding clean ones, making it a real-world threat. Through comprehensive experiments, we prove that WMaGi has strong few-shot generalization abilities to fit unseen watermarks, which makes it more powerful. In Appendix F, we discuss the limitations of WMaGi for larger-resolution and more complex images. Although WMaGi can still successfully forge the watermark to some degree, the performance is not good enough for a real-world scenario. However, we believe that further improvement over WMaGi is probable with more advanced GAN structures and training strategies. In Appendix G, we discuss the social impact of our work. WMaGi brings both positive and negative impacts on the society. REFERENCES Instagram. https://www.instagram.com/. Midjourney. https://docs.midjourney.com/. Synthid. https://www.deepmind.com/synthid. Twitter. https://twitter.com/. Introducing chatgpt. https://openai.com/blog/chatgpt. Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017. Clark Barrett, Brad Boyd, Ellie Burzstein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, and Diyi Yang. Identifying and mitigating the security risks of generative ai. CoRR, abs/2308.14840, 2023. P. V. Sai Charan, Hrushikesh Chunduri, P. Mohan Anand, and Sandeep K. Shukla. From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads. CoRR, abs/2305.15336, 2023. Danni Cheng, Xiang Li, Weihong Li, Chan Lu, Fake Li, Hua Zhao, and Wei-Shi Zheng. Large-scale visible watermark detection and removal with deep convolutional networks. In Proc. of the PRCV, volume 11258, pp. 27–40, 2018. Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, and Jiliang Tang. Diffusionshield: A watermark for copyright protection against generative diffusion models. CoRR, abs/2306.04642, 2023. Jianwei Fei, Zhihua Xia, Benedetta Tondi, and Mauro Barni. Supervised GAN Watermarking for Intellectual Property Protection. In 2022 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1–6, 2022. Pierre Fernandez, Guillaume Couairon, Herv´e J´egou, Matthijs Douze, and T. Furon. The Stable Signature: Rooting Watermarks in Latent Diffusion Models. CoRR, abs/2303.15435, 2023. Bin Guo, Yasan Ding, Yueheng Sun, Shuai Ma, Ke Li, and Zhiwen Yu. The mass, fake news, and cognition security. Frontiers in Computer Science, 15(3):153806, 2021. Julian Hazell. Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns. CoRR, abs/2305.06972, 2023. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 
Deep Residual Learning for Image Recognition. In Proc. of the CVPR, pp. 770–778, 2016. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. of the NeurIPS, pp. 6626–6637, 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Proc. of the NeurIPS, 2020. Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video Diffusion Models. In Proc. of the NeurIPS, 2022. Alain Horé and Djemel Ziou. Image quality metrics: PSNR vs. SSIM. In Proc. of the ICPR, pp. 2366–2369, 2010. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. of the CVPR, pp. 4401–4410, 2019.
sRBnyzoqkU
The paper proposes a per-layer bottom and top percentile rejection of fine-tuned weights to be combined with the pre-trained model, but this proposal is not compared to any outlier-detection alternative.
MODEL BREADCRUMBS: SCALING MULTI-TASK MODEL MERGING WITH SPARSE MASKS Anonymous authors Paper under double-blind review ABSTRACT The rapid development of AI system development has been greatly influenced by the emergence of foundation models. The conventional approach involves fine-tuning these pre-trained foundation models for specific target tasks, resulting in a rapid spread of models fine-tuned across a diverse array of tasks. This paper introduces an innovative strategy termed Model Breadcrumbs, which addresses the need to merge multiple fine-tunings of the same foundation model across a spectrum of auxiliary tasks. Model Breadcrumbs consist of a sparsely defined set of weights that carve out a trajectory within the weight space of a pre-trained model, enhancing task performance when traversed. These breadcrumbs are constructed by subtracting the weights from a pre-trained model before and after fine-tuning, followed by a sparsification process that eliminates weight outliers and negligible perturbations. Our experiments demonstrate the effectiveness of combining Model Breadcrumbs to simultaneously improve performance across multiple tasks. This contribution aligns with the evolving paradigm of updatable machine learning, reminiscent of the collaborative principles underlying open-source software development, fostering a community-driven effort to reliably update machine learning models. Through extensive experimentation involving various models and tasks, we establish that integrating Model Breadcrumbs offers a straightforward, efficient, and highly effective approach for constructing multi-task models and facilitating updates to foundation models. 1 INTRODUCTION In recent years, foundational models (Bommasani et al., 2021) have become instrumental, exhibiting unprecedented efficacy across multiple domains. These models are characterized by their extensive scale, generality, and capacity to learn and generalize knowledge from vast datasets, offering promising solutions to a diverse range of problems. The inherent ability of foundational models to be fine-tuned has led to advancements in natural language understanding (NLP) (Radford et al., 2018, 2019; Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2020; Lewis et al., 2019), computer vision (Radford et al., 2021; Ramesh et al., 2021; Luo et al., 2020; Kim et al., 2021; Cho et al., 2021), and other related fields (Rives et al., 2021; Yin et al., 2020; Rothchild et al., 2021). On one hand, the scalability of expanding foundational models to increase the tasks they can perform in practice poses a significant challenge as approaches such as joint training are limited in many practical scenarios (Cossu et al., 2022; Davari et al., 2022). In domains such as healthcare, stringent data privacy concerns often prohibit access to the underlying training data, even when the fine-tuned model on the said data is publicly accessible, rendering joint training infeasible (Asadi et al., 2023). Even in scenarios where access to training data is possible, the computational demands of simultaneous training on a multitude of tasks becomes restraining. On the other hand, the widespread adoption of foundational models has led to a certain homogenization in the field (Bommasani et al., 2021). Both the training approach, commonly transfer learning from a popular foundational model (Oquab et al., 2014), and the model architecture itself have become standardized, typically following a few popular foundation models. 
This standardization has resulted in a proliferation of publicly available fine-tuned models, all sharing the same architecture (Wolf et al., 2020). However, beyond their conventional use for model inference, these numerous fine-tuned models remain largely untapped, representing a missed opportunity (Rame et al., 2022). To address the challenges of scalability, practical constraints, and unlock the untapped potential of the growing pool of publicly available fine-tuned models, recent developments in neural network weight averaging techniques have gained attention (Izmailov et al., 2018; Neyshabur et al., 2020; Ramé et al., 2022; Ilharco et al., 2022a; Wortsman et al., 2022b; Choshen et al., 2022; Don-Yehiya et al., 2022). These approaches enable the practitioners to re-purpose and harness the increasingly valuable but underutilized resource that is the publicly available fine-tuned models. More closely aligned with our work, Ilharco et al. (2022a) introduced Task Vectors, derived from the subtraction of a fine-tuned model and its corresponding pre-trained model. A collection of these Task Vectors can be amalgamated with their respective pre-trained model to construct a multi-task model, effectively repurposing pre-existing fine-tuned models without the need for additional training or access to the original training data. However, despite their potential, the merger of Task Vectors (Ilharco et al., 2022a) encounters limitations when confronted with a large number of tasks. This is primarily due to its reliance on hyperparameter tuning through validation set performance, a process that becomes computationally prohibitive at scale. In response to these challenges and to capitalize on the untapped resources within the field, our paper introduces Model Breadcrumbs, a novel approach designed to tackle both scalability and hyperparameter generalization challenges. Model Breadcrumbs efficiently constructs multi-task models from pre-existing fine-tuned models (see Figure 1), overcoming the limitations faced by existing methods. We demonstrate that Model Breadcrumbs not only produces competitive multi-task models but also offers hyperparameters that generalize effectively as the number of tasks grows. In the next section, Section 2, we set the stage by examining related work that contextualizes Model Breadcrumbs. In Sections 3 and 4, we introduce and evaluate our framework. Finally, Section 5 offers insights into the scope and limitations of our proposed method. Our main contributions and takeaways are summarized below: 1. A novel approach to model merging and reusing the pre-existing fine-tuned models to build multi-task models, that often outperform their respective fine-tuned version. 2. We empirically show that our approach is robust towards hyperparameter perturbations, and generalizes as the number of tasks grows. 2 RELATED WORK Model Merging Recent studies in the literature have explored the merging of models trained from scratch with different initializations (Ainsworth et al., 2022; Stoica et al., 2023). One of the main challenges in this type of model merging is aligning the models before the actual merger. Therefore, research in this branch primarily focuses on finding permutations between networks to bring them into alignment with a reference model, enabling the subsequent merger of the two models in weight space. Moreover, this branch typically focuses on the merging of only two models. 
Our work, on the other hand, distinguishes itself from this line of research, as we concentrate on the model merging of networks that share the same initialization, specifically, networks initialized from a pre-trained model. Furthermore, our investigation extends beyond the merging of just two models, exploring the dynamics when multiple models are involved in the merger process. Neyshabur et al. (2020) highlighted the benefits of linearly interpolating two fine-tuned models originating from the same pre-trained model. They showed that this technique often yields a model that outperforms both of the original fine-tuned models. This discovery sparked subsequent investigations into the merging of fine-tuned models derived from a single foundation model, exploring its potential and practical applications. Wortsman et al. (2022a) demonstrated that models fine-tuned on the same dataset with different hyperparameters can be combined in a weighted average to yield an overall higher-performing model. Unlike our work, they did not consider merging models from different datasets and tasks. Choshen et al. (2022) merge multiple trained models in order to create a better pre-trained model for downstream tasks. Unlike our work, they do not demonstrate or study the creation of multi-task ability through merging. Matena & Raffel (2022) considered merging of multiple fine-tuned models originating from the same pre-trained model, trained on diverse datasets. The merger operation combines a series of fine-tuned models using a weighted average determined by the Fisher information matrix (Myung, 2003). However, computing the Fisher information matrix, as well as finding other required hyperparameters for this approach, becomes increasingly computationally expensive as the number of models to be merged grows. Therefore, it faces challenges when applied at scale. In contrast, our approach is computationally efficient, and as we will show in Section 4, its hyperparameters exhibit the ability to generalize to a higher number of models to be merged. A closely related work to ours is the study by Ilharco et al. (2022a), in which they introduced Task Vectors. Their method involves averaging models fine-tuned for various tasks, originating from the same pre-trained model, to generate multi-task models. However, their approach necessitates a validation set for each new task, which adds complexity and computational overhead.

Figure 1: Method overview. We start with a foundational model that has undergone fine-tuning on various tasks. Next, we build a fine-tuning trajectory for each fine-tuned model by subtracting the pre-trained model weights from each of the fine-tuned models. We then, at each layer, apply a masking operation to the resulting trajectory, eliminating both outliers and small values. Finally, these masked trajectories are aggregated and combined with the reference pre-trained model to create a unified multi-task model.

Federated Learning The concept of initiating learning with a pre-trained model has been explored in the federated learning literature, as seen in recent works such as Nguyen et al. (2022) and Legate et al. (2023). These studies focused on a single downstream task where data is distributed across multiple clients. In their approach, each client periodically aggregates models during the training process.
It’s important to note that this differs from our approach, which deals with multi-task learning involving multiple downstream tasks rather than a single task distributed across clients. 3 MODEL BREADCRUMBS FRAMEWORK The Model Breadcrumbs framework is designed to enable the construction of multi-task models from pre-existing fine-tuned foundation models without the need for further training. The core concept revolves around the identification and extraction of valuable knowledge from these models by navigating through their weight spaces. In this section, we provide an overview of obtaining and merging the Model Breadcrumbs. To initiate the creation of Model Breadcrumbs, we commence with a pre-trained foundation model and subsequently fine-tune it on multiple auxiliary tasks. Denoting the weights of the foundation model as $\theta$, after fine-tuning on a specific task $t$, the weights are transformed into $\theta'_t$. The initial step involves creating Task Vectors (Ilharco et al., 2022a) by calculating the weight differences between $\theta'_t$ and $\theta$, resulting in $\theta^d_t$. This difference encapsulates the knowledge acquired during the fine-tuning process. $$\theta^d_t = \theta'_t - \theta$$ (1) The weight differences derived in the previous step contain both large outliers, signifying considerable deviations, and insignificantly small differences that represent minor perturbations of the weights of the foundation model. The presence of these extremes can hinder the effectiveness of merging multiple Task Vectors (Ilharco et al., 2022a). To address this issue, we employ a sparsification process that masks out both large outliers and small differences. The masking operation is determined by a specific percentage of weights within each layer of $\theta_t^d$. The selection process relies on the absolute magnitudes of these weights. If a weight’s absolute magnitude is excessively large relative to the remaining weights in that layer, it is subject to masking (masking $\gamma$ percent of all weights in that layer). Additionally, if a weight’s absolute magnitude is relatively small, it is also subject to masking (masking $\beta$ percent of all weights in that layer). The remaining weights in $\theta_t^d$ remain unmasked. This masking procedure is represented as $m_t^{\beta,\gamma}$. The Model Breadcrumbs are constructed by applying the mask $m_t^{\beta,\gamma}$ to the Task Vectors (Ilharco et al., 2022a). We now have a set of weight differences that define a trajectory within the weight space of the foundation model. Traversing this trajectory allows us to effectively transfer the knowledge accumulated during fine-tuning across tasks, thereby enhancing performance on multiple tasks simultaneously. For a total of $T$ tasks, we assemble a multi-task model $\theta^*$ by following the trajectories defined by the Model Breadcrumbs with a specific strength parameter $\alpha$. The formation of this multi-task model can be expressed as: $$\theta^* = \theta + \alpha \sum_{t \in T} m_t^{\beta,\gamma} \cdot \theta_t^d$$ (2) 4 EXPERIMENTS In this section, we conduct a series of experiments to comprehensively evaluate the Model Breadcrumbs framework. Our experiments focus on the following key aspects: 1. **Merging Model Breadcrumbs**: We incrementally add tasks, totalling 8 in our investigation, to assess the scalability and performance of merged Model Breadcrumbs as the number of tasks increases. 2.
**Generalization of Hyperparameters**: We explore how the hyperparameters introduced by Model Breadcrumbs—$\alpha$, $\beta$, and $\gamma$—generalize over the number of datasets. 3. **Effect of Scale**: We investigate the impact of the scale and complexity of the foundation models on the Model Breadcrumbs’ adaptability and robustness. 4. **Ablation Study**: We study the importance of the design choices introduced by Model Breadcrumbs for successful and competitive model merging.

4.1 DATA, METRICS, AND MORE Our experimental setup builds on Ilharco et al. (2022a). In our analysis, following Ilharco et al. (2022a), we report results in terms of normalized accuracy, which is defined as the ratio between the accuracy achieved by the merged model and that attained by the fine-tuned model:

$$\text{Normalized Accuracy} = \frac{\text{Accuracy of Merged Model}}{\text{Accuracy of Fine-tuned Model}}$$ (3)

It is noteworthy that the fine-tuned model establishes the upper bound with a normalized accuracy value of 1. Subsequently, the concept of average normalized accuracy is introduced, representing the mean normalized accuracy across multiple tasks. We assess our findings using an extensive set of 8 datasets: Cars (Krause et al., 2013), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019), GTSRB (Houben et al., 2013), MNIST (LeCun et al., 2010), RESISC45 (Cheng et al., 2017), SUN397 (Xiao et al., 2010), and SVHN (Netzer et al., 2011). For more information on the datasets see Table 1.

| Dataset | Training | Validation | Testing | Number of classes |
|-----------|----------|------------|---------|-------------------|
| Cars | 7,330 | 814 | 8,041 | 196 |
| DTD | 3,384 | 376 | 1,880 | 47 |
| EuroSAT | 21,600 | 2,700 | 2,700 | 10 |
| GTSRB | 23,976 | 2,664 | 12,630 | 43 |
| MNIST | 55,000 | 5,000 | 10,000 | 10 |
| RESISC45 | 17,010 | 1,890 | 6,300 | 45 |
| SUN397 | 17,865 | 1,985 | 19,850 | 397 |
| SVHN | 68,257 | 5,000 | 26,032 | 10 |

Table 1: Data statistics.

Using the above datasets, we first fine-tune a series of CLIP models (Radford et al., 2021). In our fine-tuning process, we adopt a procedure similar to that outlined in a previous study (Ilharco et al., 2022b). Specifically, we conduct fine-tuning over 2000 iterations using a batch size of 128. We set the learning rate to 1e-5 and employ a cosine annealing learning rate schedule with 200 warm-up steps. The optimization is performed using the AdamW optimizer (Loshchilov & Hutter, 2017). During the fine-tuning process, we maintain the weights of the classification layer generated by CLIP’s text encoder in a frozen state. This approach ensures that we do not introduce additional learnable parameters, a strategy that has been validated in prior work (Ilharco et al., 2022b).

Figure 2: The solid line is the averaged normalized accuracy across all evaluation points. Each data point corresponds to an experiment involving a subset of the 8 tasks under study. Notably, it is evident that the Model Breadcrumbs (with 85% sparsity) consistently outperform the Task Vectors (Ilharco et al., 2022a). Specifically, in the experiment involving all eight tasks, the Model Breadcrumbs outperform the Task Vectors by a substantial margin of 8.33%. (a) At each point, evaluation is performed over all 8 tasks considered in our study. (b) At each point, evaluation is performed only over the observed subset of tasks.
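Before turning to the merging experiments, the following is a minimal sketch of how the breadcrumb construction of Eq. (1) and the merging of Eq. (2) could be implemented over PyTorch state dicts. The helper names, the use of torch.quantile for the per-layer percentile thresholds, and the default values of $\alpha$, $\beta$, and $\gamma$ shown here are illustrative assumptions, not the authors' released code.

```python
import torch

def breadcrumb(pretrained, finetuned, beta=0.85, gamma=0.01):
    """Build one breadcrumb: Eq. (1) followed by the per-layer beta/gamma mask.

    pretrained/finetuned are state dicts with matching tensors; beta masks the
    smallest-|w| fraction of each layer, gamma masks the largest-|w| fraction.
    """
    crumbs = {}
    for name, theta in pretrained.items():
        if not torch.is_floating_point(theta):
            continue  # skip integer buffers such as batch-norm counters
        diff = finetuned[name] - theta                      # Eq. (1): theta_t^d
        mags = diff.abs().flatten().float()
        low = torch.quantile(mags, beta)                    # drop weights with |w| below this
        high = torch.quantile(mags, 1.0 - gamma)            # drop outliers with |w| above this
        mask = (diff.abs() >= low) & (diff.abs() <= high)   # m_t^{beta,gamma}
        crumbs[name] = diff * mask
    return crumbs

def merge(pretrained, all_crumbs, alpha=0.3):
    """Eq. (2): theta* = theta + alpha * sum_t m_t^{beta,gamma} . theta_t^d."""
    merged = {name: theta.clone() for name, theta in pretrained.items()}
    for crumbs in all_crumbs:
        for name, delta in crumbs.items():
            merged[name] = merged[name] + alpha * delta
    return merged
```

With breadcrumbs built for every task, merge(pretrained, [breadcrumb(pretrained, ft) for ft in finetuned_models]) would yield the multi-task model of Eq. (2); in the validation-free setting of Section 4.3, $\alpha$, $\beta$, and $\gamma$ would be tuned once on a small number of tasks and then reused as further breadcrumbs are added.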
4.2 Merging Model Breadcrumbs In this section, we investigate the scalability and performance of merged Model Breadcrumbs as we incrementally add tasks, totalling 8 in our study, as listed in Section 4.1. Merging allows us to construct multi-task models capable of excelling across multiple tasks concurrently. This versatility is valuable both in scenarios where multiple privately fine-tuned models exist and in cases where publicly available models are utilized. It enables the utilization of existing knowledge from these models without necessitating additional training or access to more training data. We compare the Model Breadcrumbs with 85% sparsity and the recently proposed Task Vectors (Ilharco et al., 2022a) in terms of their impact on the overall model’s performance. We assess all possible task subsets, amounting to a total of $2^8$ combinations, under two settings: 1. evaluation over all 8 tasks, and 2. evaluation only on the subset of tasks that have been observed. As we can see in Figure 2a, merging Model Breadcrumbs results in superior multi-task models compared to the Task Vectors (Ilharco et al., 2022a). Furthermore, the performance gap between these two approaches increases as more tasks are observed, resulting in vastly superior multi-task models when more Model Breadcrumbs are available. In Figure 2b, we can see that for small task numbers the resulting merged model performs closely to that of the multiple fine-tuned models, although the gap increases as more tasks are added. Model Breadcrumbs again prove to be more performant than Task Vectors (Ilharco et al., 2022a) in this setting. Overall, Model Breadcrumbs consistently outperform the Task Vectors proposed by Ilharco et al. (2022a). Specifically, when evaluating these methods across the 8 tasks using their respective optimal hyper-parameters, the Task Vectors (Ilharco et al., 2022a) achieve an average normalized accuracy of 75.33%, whereas our proposed Model Breadcrumbs achieve a significantly higher accuracy of 83.00%, while having an 85% sparsity.

4.3 Validation-Free Setting In Section 4.2, we compared Model Breadcrumbs and Task Vectors (Ilharco et al., 2022a) under their respective optimal hyperparameters. These hyperparameters were tuned based on model performance on the validation dataset for each subset of tasks, following Ilharco et al. (2022a). However, as the number of tasks increases, the search for optimal hyperparameters becomes increasingly resource-intensive. Furthermore, the need for a validation set from each task being added can be restrictive due to privacy concerns or the unavailability of additional validation data. Thus, we consider a new setting where hyperparameters are tuned based on a few tasks, and subsequent tasks are added using these pre-determined hyperparameters.

Figure 3: Validation Free Setting. For the ViT-B-32 model, we tune the hyperparameters of each method (breadcrumbs and task vectors) based on the first 1, 2, or 3 tasks and add additional tasks using those hyperparameters (validation set free). Moreover, for the ViT-L-14 model, we only tune the hyperparameters for the 1 task scenario and evaluate on the additional tasks using those hyperparameters. We observe that breadcrumbs substantially outperform task vectors in this setting.

The results are shown in Figure 3. Remarkably, our experiments reveal that the hyperparameters of Model Breadcrumbs exhibit a high degree of generalizability.
Specifically, for the ViT-B-32 model when considering scenarios involving three tasks and beyond, up to the 8-task scenario, the optimal hyperparameters remain consistent. Moreover, for the ViT-L-14 model, the hyperparameters do not change beyond the 1 task scenario. This remarkable stability underscores the robustness and versatility of Model Breadcrumbs. We observe that on the other hand the approach of Ilharco et al. (2022a) can quickly collapse in performance. The practical implication of this stability in hyperparameter settings is that, in practice, we can rely on a relatively small number of tasks to determine optimal hyperparameters when applying Model Breadcrumbs to diverse multi-task learning scenarios. This simplifies the implementation process, reduces the need for extensive hyperparameter tuning, and contributes to the framework’s practicality and ease of use. In contrast, Task Vectors (Ilharco et al., 2022a) do not exhibit the same level of hyperparameter stability. Consequently, this fundamental divergence between Model Breadcrumbs and Task Vectors (Ilharco et al., 2022a) underlines the substantial advantage of Model Breadcrumbs in real-world multi-task learning scenarios. 4.4 Effect of Scale In this section, we explore the impact of using larger CLIP models on our analysis, comparing the performance of ViT-B-32, ViT-B-16, and ViT-L-14 models. For each model type, the optimal Model Breadcrumbs were found at 85% sparsity. As shown in Figure 4, the adoption of larger models significantly improves the performance of both our proposed Model Breadcrumbs method and the Task Vector (Ilharco et al., 2022a) baseline. Furthermore, as more tasks are introduced, the capacity to construct better-performing multi-task models grows, with larger-scale models demonstrating superior results. Specifically, we observe in Figure 4a, when utilizing the ViT-L-14 model and considering 8 tasks, merging Model Breadcrumbs produces a single multi-task model with an average performance that reaches 91.48% of the performance achieved by employing 8 individual fine-tuned models (i.e., one per task). The shift from 8 fine-tuned models to a single multi-task model substantially reduces inference time and compute resources, accompanied by only a minor relative loss in performance. This underscores the practical advantages of our approach. Moreover, Figure 4b highlights that the performance decline observed when merging either Model Breadcrumbs or Task Vectors (Ilharco et al., 2022a) can be significantly mitigated by adopting larger-scale models. Notably, for the ViT-L-14 model, merging Model Breadcrumbs for certain tasks can result in multi-task models that either match or surpass the performance of individual fine-tuned models. To delve deeper into this phenomenon, we conducted a closer examination of task merger for ViT-L-14, considering the two tasks scenario. Our observations reveal that: As we can see in Figure 5, when adding pairs of tasks via Model Breadcrumbs and Task Vectors (Ilharco et al., 2022a), we observe that the merger of task vectors (Ilharco et al., 2022a) from two tasks generally leads to improved performance on both tasks, resulting in a single model that is competitive and often superior to using two specialized fine-tuned models. Furthermore, for the same task pairs, Model Breadcrumbs consistently produce multi-task models that surpass their equivalent Task Vectors (Ilharco et al., 2022a) versions. 
Notably, Model Breadcrumbs mergers generate a higher number of multi-task models where both tasks exceeded their respective fine-tuned accuracy levels. This highlights the potential of Model Breadcrumbs not only to maintain but also to enhance task-specific performance within a multi-task framework.

Figure 4: Comparative performance analysis of Model Breadcrumbs and Task Vector (Ilharco et al., 2022a) methods across varying CLIP model scales (ViT-B-32, ViT-B-16, and ViT-L-14) as the number of tasks increases. The solid line represents the averaged normalized accuracy across all evaluation points. Each data point corresponds to an experiment involving a subset of the 8 tasks under study. Our findings highlight the potential of larger-scale models to mitigate performance degradation and, as seen in Figure 4b, the capability of Model Breadcrumbs to produce multi-task models that surpass individual fine-tuned models for specific tasks.

Figure 5: Comparative analysis of Model Breadcrumbs and Task Vectors (Ilharco et al., 2022a) in the merger of task pairs, revealing improved accuracy on both tasks and a higher frequency of multi-task models surpassing individual fine-tuned accuracy levels when employing Model Breadcrumbs.

4.5 Ablations In this section, we perform ablations to examine alternative design decisions within the Model Breadcrumbs method. Specifically, we explore different approaches for constructing the masking operation, namely: 1. Bottom-Weight Masking: Masking only the smallest absolute-magnitude weights per layer. 2. Top-Weight Masking: Masking only the largest absolute-magnitude weights per layer. We compare these alternatives to the full Model Breadcrumbs approach, which encompasses both (1) and (2), as well as the Task Vectors (Ilharco et al., 2022a) method, which lacks any masking. Our goal is to assess the impact of these design choices on the performance of the merged multi-task models derived from these approaches. In our investigation, we conduct a grid search to identify the optimal hyperparameters for each of the four configurations. We assess the resulting multi-task models on the 8 tasks discussed in Section 4.1. The results are shown in Figure 6. Our findings reveal two key insights: (i) both forms of weight masking, as employed in Model Breadcrumbs, are essential for achieving competitive performance. Model Breadcrumbs, which combines both bottom and top weight masking, emerges as the most effective approach. (ii) The grid search for hyperparameters within the Model Breadcrumbs approach yields a higher distribution of high-performance multi-task models compared to the other three settings. Furthermore, there is much lower variation in the overall performance distribution of the multi-task models produced by the Model Breadcrumbs. These observations underscore the robustness of Model Breadcrumbs to variations in hyperparameter settings, further enhancing its practicality and reliability in real-world applications.

Figure 6: Performance comparison of the Model Breadcrumbs against alternative masking choices reveals: (1) Model Breadcrumbs yields a higher distribution of high-performance multi-task models, underlining its robustness towards hyperparameter perturbations. (2) Model Breadcrumbs produces the highest performing multi-task model. The number on top of each violin indicates the performance of the highest performing model of that setting.

5 CONCLUSIONS In this paper, we introduced Model Breadcrumbs, a novel approach that addresses the challenge of constructing multi-task models from pre-existing fine-tuned foundation models. Our method efficiently
leverages weight differences between a foundation model and its various fine-tuned versions to create guiding trajectories within the weight space of the pre-trained model, leading to high-performing multi-task models when the trajectories are traversed. Through extensive experimentation, we have demonstrated the effectiveness of Model Breadcrumbs in simultaneously enhancing performance across multiple tasks. Notably, our approach exhibits stable and generalizable hyperparameters, simplifying its implementation and rendering it highly practical for real-world multi-task learning scenarios. Furthermore, our exploration of model scale has revealed that Model Breadcrumbs benefits from larger-scale models, closing the performance gap between merged models and individual fine-tuned models. While Model Breadcrumbs presents a promising approach for constructing multi-task models, it does come with certain limitations. Its performance can still be affected by the quality of the fine-tuned models used as the starting point. If the fine-tuning process leads to models with poor generalization or severe overfitting, Model Breadcrumbs may inherit these issues. Future research efforts can focus on addressing the potential limitations related to the quality of fine-tuned models. Moreover, currently, Model Breadcrumbs assigns the same weight when averaging over multiple trajectories, but more sophisticated aggregation techniques could be investigated to improve performance. In conclusion, Model Breadcrumbs offers a straightforward, efficient, and highly effective approach for constructing multi-task models, capitalizing on the abundance of the publicly available fine-tuned models derived from a select few foundation models. Additionally, it facilitates updates to foundation models, aligning with the evolving paradigm of updatable machine learning and fostering community-driven efforts for model refinement. We anticipate that our approach will contribute to the development of more efficient and scalable multi-task learning solutions in the future.

REFERENCES

Samuel K Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. *arXiv preprint arXiv:2209.04836*, 2022.

Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, and Eugene Belilovsky. Prototype-sample relation distillation: Towards replay-free continual learning. In *International Conference on Machine Learning*, pp. 1093–1106. PMLR, 2023.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. *Proceedings of the IEEE*, 105:1865–1883, October 2017.

Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. Unifying vision-and-language tasks via text generation. In *International Conference on Machine Learning*, pp. 1931–1942. PMLR, 2021.

Leshem Choshen, Elad Venezian, Noam Slonim, and Yoav Katz. Fusing finetuned models for better pretraining. *arXiv preprint arXiv:2204.03044*, 2022.

M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In *Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)*, 2014.
Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, and Davide Bacciu. Continual pre-training mitigates forgetting in language and vision. *arXiv preprint arXiv:2205.09357*, 2022. MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, and Eugene Belilovsky. Probing representation forgetting in supervised and unsupervised continual learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 16712–16721, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, and Leshem Choshen. Cold fusion: Collaborative descent for distributed multitask finetuning. *arXiv preprint arXiv:2212.01378*, 2022. Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing*, 2019. Sebastian Houben, Johannes Stallkamp, Jan Salmen, Marc Schlipsing, and Christian Igel. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In *International Joint Conference on Neural Networks*, 2013. Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. *arXiv preprint arXiv:2212.04089*, 2022a. Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights. *Advances in Neural Information Processing Systems*, 35:29262–29277, 2022b. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. *arXiv preprint arXiv:1803.05407*, 2018. Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In *International Conference on Machine Learning*, pp. 5583–5594. PMLR, 2021.
bDooTVT4t2
In Theorem 3.2, your certification uses the $p$-norm of $\frac{\delta_i}{\sigma_i}$, but it seems that $\frac{\delta_i}{\sigma_i}$ is a one-dimensional scalar, as $\delta_i$ is the $i$-th dimension of the perturbation $\delta$. Furthermore, this theorem seems to be a direct corollary of Theorem 3.1, because your certification divides each $\delta_i$ by the variance $\sigma_i$ (not anisotropic anymore?).
Universally Amplifying Randomized Smoothing for Certified Robustness with Anisotropic Noise Anonymous authors Paper under double-blind review Abstract Randomized smoothing has achieved great success for certified adversarial robustness. However, existing methods (especially the theory for certification guarantee) rely on a fixed i.i.d. noise distribution for all dimensions of the data (e.g., all the pixels in an image), and may result in limited performance of certified robustness. To address this limitation, we propose UCAN: a novel technique that Universally amplifies randomized smoothing for Certified robustness with Anisotropic Noise. It can theoretically transform any randomized smoothing method with isotropic noise to ensure certified robustness based on different variants of anisotropic noise. The theories universally work for using different noise distributions against different $\ell_p$ perturbations. Furthermore, we also design a novel framework with three example noise parameter generators (NPGs) for customizing the anisotropic noise. Finally, experimental results demonstrate that UCAN significantly outperforms the state-of-the-art (SOTA) methods, e.g., the certified accuracy can be improved by up to 182.6% at large certified radii on MNIST, CIFAR10, and ImageNet datasets. 1 Introduction Deep learning (DL) models have been proven to be vulnerable to well-crafted adversarial perturbations (Goodfellow et al., 2015; Carlini & Wagner, 2017). To protect DL models against adversarial attacks, defense methods with certified robustness guarantees are desired. Recently, randomized smoothing methods (Lecuyer et al., 2019; Teng et al., 2020; Cohen et al., 2019) were proposed to provide efficient certified robustness to any classifier and become the state-of-the-art. By injecting noise into the training and inference phases, randomized smoothing turns any classifier into a smoothed classifier. Theoretical bounds were then derived based on the noise distributions used for randomized smoothing. For example, Cohen et al. (2019) derives a tight $\ell_2$ certified radius for adopting the Gaussian noise to smoothen the classifier where the same distribution is used for all the data dimensions (called “isotropic noise”). More recently, some works (Zhang et al., 2020; Yang et al., 2020; Hong et al., 2022) improve the certified robustness performance of randomized smoothing by seeking better isotropic noise distributions, e.g., hyperbolic secant distribution (Hong et al., 2022), Pareto distribution (Yang et al., 2020), or general exponential distribution (Zhang et al., 2020). However, most existing randomized smoothing methods cannot provide the best possible certification since their theories for certified robustness guarantees are limited to isotropic noise distributions. To unlock the certification guarantee with anisotropic noise-based randomized smoothing, there are two major challenges. The first challenge is how to develop novel theories universally for anisotropic noise distributions, in which noises with different means and variances are assigned to different data dimensions. The second challenge is how to assign proper means and variances for different data dimensions to optimize the certification performance. In this paper, we propose a novel technique to Universally amplify randomized smoothing for Certified robustness with Anisotropic Noise (“UCAN”). 
Specifically, we first propose a universal theory that can convert any randomized smoothing-based certification with isotropic noise into certification with anisotropic noise (see Table 1). Second, we propose three different methods under a unified framework to customize the anisotropic noise distributions for different data dimensions. Thus, UCAN can universally amplify the certification performance of all the existing randomized smoothing methods. In summary, UCAN makes the following key contributions to certified robustness: 1) **First Universal Theory for certification with Anisotropic Noise.** To our best knowledge, we take the first step to propose a universal theory for certifying the robustness of randomized smoothing with any (anisotropic) noise distribution. This new theory universally supports all the existing (and future) randomized smoothing methods using anisotropic noises for certification and against various $\ell_p$ perturbations (e.g., $\ell_1$, $\ell_2$, ..., $\ell_\infty$). 2) **Novel Noise Parameter Generators (NPGs) for Customizing Anisotropic Noise.** We also design three NPGs (including two novel neural networks) to efficiently customize the element-wise hyper-parameters (mean and variance) in the anisotropic noise distributions for all the data dimensions. They significantly amplify the certification from different aspects. 3) **Significantly Boosted Certification.** Experimental results on benchmark datasets demonstrate that UCAN drastically outperforms the SOTA randomized smoothing-based certified robustness methods. For instance, the certified accuracy can be improved by 142.5%, 182.6%, and 121.1% over the SOTAs on MNIST, CIFAR10, and ImageNet, respectively. ## 2 RELATED WORK **Randomized Smoothing.** It was first studied by Lecuyer et al. (2019) based on the Differential Privacy theory (Dwork, 2006). Simultaneously, the first tight guarantee was proposed by Cohen et al. (2019), in which, the smoothed classifier’s prediction (via Gaussian noise) can be tightly guaranteed to be consistent within a $\ell_2$ certified radius. Later, a series of methods have been proposed to guarantee the robustness against different $\ell_p$ perturbations with different noise distributions, e.g., Teng et al. (2020) derived the certified radius for $\ell_1$ perturbations with Laplace noise, and Lee et al. (2019) derived the certified radius against $\ell_0$ perturbations with uniform noise. Another line of methods (Zhang et al., 2020; Yang et al., 2020; Hong et al., 2022) proposed unified theories to guarantee the robustness against a diverse set of $\ell_p$ perturbations with different noises. However, all these methods (especially the theories) are limited to adopting isotropic noises for randomized smoothing. **Data-Dependent Randomized Smoothing.** It aims to improve the certified robustness by optimizing the noise distribution for different inputs (but still based on isotropic noise distributions due to the lack of theories for randomized smoothing with anisotropic noise). For example, Alfarra et al. (2020) optimized the variance parameter in Gaussian distribution via the gradient of the certified radius w.r.t. the variance. Sükenik et al. (2021) considered the variance as a function of the input, and models the relationship between them. Wang et al. (2020) selected a proper variance by grid-search. **Anisotropic Randomized Smoothing.** Recently Eiras et al. (2022) proposed a theorem for anisotropic randomized smoothing based on Lipschitz theories under a specific setting. 
However, its theory is based on assumptions that the networks are $L$-Lipschitz continuous and thus the universality is relatively limited. Furthermore, UCAN significantly outperforms Eiras et al. (2022) as shown in our experiments. ## 3 UCAN: THEOREM AND METRIC ### 3.1 RANDOMIZED SMOOTHING WITH ISOTROPIC NOISE FOR CERTIFIED ROBUSTNESS We study the classification problem that maps input from $\mathbb{R}^d$ to classes $\mathcal{C}$. Given any base classifier $f$, existing randomized smoothing turns $f$ into a “smoothed” classifier $g$ with the isotropic noise. Given the noise $\epsilon \in \mathbb{R}^d$ from any isotropic distribution $\phi$, and let $X = x + \epsilon$, the smoothed classifier can be defined as: $g(x) = \arg\max_{c \in \mathcal{C}} \mathbb{P}(f(X) = c)$. Then existing theorems can be summarized as: **Theorem 3.1 (Certified Robustness via Randomized Smoothing with Isotropic Noise).** Given a smoothed classifier $g$ based on arbitrary standard classifier $f$. For a specific $x \in \mathbb{R}^d$, let $X = x + \epsilon$, if the top-1 class is $c_A \in \mathcal{C}$, the lower bound probability of the top-1 classes $p_A \in [0, 1]$, and the upper bound probability of other classes $\overline{p_B} \in [0, 1]$ satisfies: $$\mathbb{P}(f(X) = c_A) \geq p_A \geq \overline{p_B} \geq \max_{c \neq c_A} \mathbb{P}(f(X) = c)$$ (1) then the prediction of \( g \) on the perturbed input \( x + \delta \) will consistently be \( c_A \) if \( ||\delta||_p < R(p_A, p_B) \), where \( ||\cdot||_p \) denotes the \( \ell_p \)-norm, and \( R(\cdot) \) denotes a general function of certified radius formulas. The certified radius formula varies when the noise PDFs are different. We list several certified radius functions of existing randomized smoothing theorems with isotropic noise in Table 1 (left 5 columns). The Alternative Lebesgue Measure will be introduced as an anisotropic measure for anisotropic RS in Section 3.2. ### 3.2 Theorem for Anisotropic Noise In this section, we establish a universal theory for the certification via randomized smoothing with anisotropic noise. Given any isotropic randomized smoothing methods, our method can universally transform them to anisotropic randomized smoothing for certified robustness. Specifically, given an arbitrary isotropic noise with zero-mean and the scale parameter \( \lambda \) (w.r.t. the variance of the PDF) in each dimension, we can represent any anisotropic noise with the anisotropic mean offsets \( \mu = [\mu_1, \mu_2, ..., \mu_d] \) and the anisotropic multipliers of the scale parameter \( \sigma = \text{diag}(\sigma_1, \sigma_2, ..., \sigma_d) \) with \( \sigma_i \) for the \( i \)-th dimension, and \( \sigma_i > 0 \). Then, the \( i \)-th dimension of the anisotropic noise has the mean \( \mu_i \) and scale parameter \( \sigma_i \lambda \), where \( \lambda \) is multiplied by \( \sigma_i \). Formally, denoting any isotropic noise as \( \epsilon \) (generated by \( \lambda \)), we define the respective anisotropic noise \( \epsilon' \) as: \[ \epsilon' = \epsilon^\top \sigma + \mu \] Then robustness of randomized smoothing with anisotropic noise can be ensured per Theorem 3.2. **Theorem 3.2 (Anisotropic Randomized Smoothing via Universal Transformation).** Let \( f : \mathbb{R}^d \to C \) be any deterministic or random function. Suppose that for the multivariate random variable with isotropic noise \( X = x + \epsilon \) in Theorem 3.1, the certified radius function is \( R(\cdot) \). 
Then, for the corresponding anisotropic input \( Y = x + \epsilon' = x + \epsilon^\top \sigma + \mu \), if there exist \( c'_A \in C \) and \( p'_A, p'_B \in [0, 1] \) such that: \[ P(f(Y) = c'_A) \geq p'_A \geq p'_B \geq \max_{c \neq c'_A} P(f(Y) = c) \] (3) then \( g'(x + \delta') \equiv \arg \max_{c \in C} P(f(Y + \delta') = c) = c'_A \) for all perturbations \( \delta' \) satisfying \( \left( \sum_i (|\delta'_i| / \sigma_i)^p \right)^{1/p} \leq R(p'_A, p'_B) \), where \( g' \) denotes the smoothed classifier based on anisotropic noise, \( \delta' \) the perturbation on \( x \), and \( i \) the dimension index. **Proof.** See the detailed proof in Appendix A. In Table 1, we derive the corresponding certified radius functions of anisotropic noise-based randomized smoothing methods, transformed from most of the existing randomized smoothing methods with isotropic noise (derived based on Theorem 3.2). For other randomized smoothing methods (e.g., Zhang et al. (2020); Hong et al. (2022)) without explicit certified radius functions, our transformation method can be directly applied to the numerical certified radius result. We also present the binary case of Theorem 3.2 (binary classifier) in Appendix B. In Figure 1(a), we illustrate the benefits that the anisotropic noise can bring to randomized smoothing. In Theorem 3.2, we observe that the mean offset \( \mu_i \) does not affect the derivation of the certified robustness with anisotropic noise. Thus, it is likely that the probabilities \( p'_A \) and \( p'_B \) can be improved by a proper mean offset in the anisotropic noise. Also, with the heterogeneous variance, the anisotropic noise can fit different dimensions of the input better without over-distortion.

Figure 1: (a) The benefits of anisotropic noise over isotropic noise. When evaluating \( x \) over smoothed classifiers, the decision regions of the base classifier \( f \) are denoted in different colors. The dashed lines are the level sets of the noise distribution. The left figure shows randomized smoothing with isotropic Gaussian noise \( N(0, \lambda^2 I) \) as in Cohen et al. (2019), whereas the right figure illustrates randomized smoothing with anisotropic Gaussian noise \( N(\mu, \Sigma) \), where \( \Sigma = \lambda^2 \text{diag}(\sigma_1^2, \sigma_2^2, ..., \sigma_d^2) \). The anisotropic noise can improve the certified robustness by improving the gap between \( p_A \) and \( p_B \). (b) Illustration of the robustness region constructed with isotropic (blue) and anisotropic (green) noise.

**Boundary for Certified Region.** In Corollary 3.3, we derive a certified radius \( R' \) for anisotropic randomized smoothing in the form of an isotropic \( \ell_p \)-ball, i.e., \( ||\delta'||_p \leq R' \). However, such certified radii might be inaccurate for evaluating the certified robustness under anisotropic circumstances due to the asymmetric shape of the robustness region constructed by the anisotropic noise. In other words, the certified area represented by the isotropic \( \ell_p \)-ball is a subset of the anisotropic certified region depicted by \( \left( \sum_i^d (\frac{|\delta'_i|}{\sigma_i})^p \right)^{\frac{1}{p}} \leq R(p_A', p_B') \) (see Figure 1(b) for illustration). **Corollary 3.3.** For the anisotropic input \( Y \) in Theorem 3.2, if the condition in Eq.
(3) is satisfied, then \( g'(x + \delta') \equiv \arg \max_{c \in C} P(f(Y + \delta') = c) = c'_A \) for all \( ||\delta'||_p \leq R' \) such that \[ R' = \min\{\sigma_i\} R \] where \( R \) is the corresponding certified radius of randomized smoothing via isotropic noise, and \( \min\{\cdot\} \) denotes the minimum entry. **Proof.** The guarantee in Theorem 3.2 holds for \( ||\frac{\delta'}{\sigma_i}||_p \leq R \). Since \( ||\frac{\delta'}{\sigma_i}||_p \leq ||\frac{\delta'}{\min\{\sigma_i\}}||_p \), if \( ||\frac{\delta'}{\min\{\sigma_i\}}||_p \leq R \), the guarantee still holds. This requires \( ||\delta'||_p \leq \min\{\sigma_i\} R \). Besides the \( \ell_p \) radii, we also introduce a general metric, i.e., Alternative Lebesgue Measure (ALM) [Eiras et al., 2022], which is defined as \( ALM = \sqrt[d]{\prod_{i=1}^d \sigma_i} R \) for auxiliary evaluation on the anisotropic robustness region (note that \( \ell_p \) radius fall short to evaluate the robustness gain in some dimensions due to its symmetry, see Figure 1(b)). ALM is equivalent to the certified radii when applied to traditional isotropic RS where \( \sigma_i = 1 \) for all dimensions. See [Eiras et al., 2022] and Appendix C for detailed theoretical analysis and discussions about the ALM. 4 CUSTOMIZING ANISOTROPIC NOISE With Theorem 3.2 it is feasible to assign heterogeneous noise parameters to different data dimensions. However, finding more optimal heterogeneous noise parameters rather than randomly assigning them remains a challenge. To this end, in UCAN, we design a unified framework to customize anisotropic noise for randomized smoothing (see Figure 2(a)), which integrates three noise parameter generators (NPGs) with different scales of trainable parameters (see Figure 2(b)) and optimality: 1) Customizing **pattern-fixed anisotropic noise** by assigning heterogeneous noise parameters based on certain patterns of the data. The NPG is training-free and inference-free, but with low optimality. 2) Customizing **universal anisotropic noise** by optimizing trainable noise parameter generator according to a specific dataset. It requires the pre-training of the NPG (a neural network) and one-time inference for the dataset, with moderate optimality. 3) Customizing input-dependent anisotropic noise by optimizing the input-dependent noise parameter generator during training. It requires the pre-training of the NPG (a neural network) and one-time inference for each input, but with high optimality. Note that these three different types of NPGs are only example methods. Other methods can also be designed with different objectives. The practical algorithms are detailed in Appendix E. 4.1 Pattern-Fixed Anisotropic Noise The NPG for pattern-fixed anisotropic noise is motivated by the intuitive understanding that different portion of the data (e.g., areas on the image) may influence the prediction variability (Gilpin et al., 2018). Typically, an image’s center, where an object likely exists, may include more visual information than its borders, necessitating lower variance in the center to avoid blurring informative areas; while the borders can tolerate larger noise (e.g., higher variance) without notably hindering classification performance. Hence, this NPG assigns fixed spatial patterns for anisotropic noise. 
Specifically, let the spatial distribution of the variances follow a function $\sigma(a, b)$, where $a$ and $b$ are the pixel’s coordinates of the horizon and vertical axes such that the center of the image is denoted as $(a, b) = (0, 0)$. As discussed above, the central variance can be intuitively set to be smaller than the border’s variance. Thus, we design three different types of spatial distribution as follows: $$\sigma(a, b) = \kappa ||(a, b)||_p^2 + \iota, \quad p = 1, 2, \infty$$ where $|| (a, b) ||_p$ denotes the $\ell_p$-norm distance between $(a, b)$ and $(0, 0)$, $\kappa$ is a constant parameter tuning the overall magnitude of the variance, $\iota$ denotes the variance of a center pixel since $\sigma(0, 0) = \iota$, and $\iota > 0$ such that $\sqrt{\prod_{(a, b)} \sigma(a, b)} \neq 0$. Figure 3 shows three examples of the spatial distribution when $p = 1, 2, \infty$, where $\kappa$ and $\iota$ are set as 1. The noise mean is set as 0 to avoid unnecessary deviation in the images. The pattern-fixed anisotropic noise is an intuitive improvement by considering the different contributions to the classification results by different portions of the data (e.g., pixels). However, it is not sufficiently fine-tuned especially considering the diverse characteristics of different datasets. In the next subsection, we propose an automated approach that can derive the near-optimal spatial distribution of the variances for the anisotropic noise for each dataset. 4.2 Universal Anisotropic Noise This NPG leverages a fix-input neural network generator (Creswell et al., 2018) to learn anisotropic variances during the robust training (i.e., training with noise), which is shown in Figure 2(b). Specifically, the generator takes a fixed constant as the input and outputs an anisotropic $\sigma$ tensor and $\mu$ tensor with each element $\mu_i$ and $\sigma_i$ denoting the mean offset and the multiplier of noise (viz. variance) per pixel, respectively. Given any noise $\epsilon$ drawn from any noise distribution, the anisotropic noise can be generated as $\epsilon' = \epsilon \cdot \sigma + \mu$. Then, the noise will be added to the input for randomized smoothing. Architecture. We adapt the generator architecture (a multi-layer perception) in Generative Adversarial Network (GAN) (Goodfellow et al., 2020) to design a novel neural network generator. Different from GAN, this NPG does not depend on the input data but depends on the entire dataset. Therefore, we fixed the input as constants. The NPG consists of 5 linear layers and the first 4 of them are followed by activation layers. The output will be transformed by a hyperbolic tangent function with an amplification factor $\gamma$, i.e., $\gamma \tanh(\cdot)$. This amplified hyperbolic tangent layer is to limit the value of the variances since an infinite value in the noise parameters will fail the training. **Loss Function.** To train the NPG towards a desired convergence, we need to design a proper loss function. Note that certified robustness with anisotropic noise can be measured by the ALM in a general way. Then, the NPG training should aim to maximize the product $\sqrt{\prod \sigma}$ and the traditional certified radius $R$. To increase the product, we alternatively maximize the average $\sigma$ since maximizing the product could lead to unstable training. 
Since the certified radius is a function of $p_A$, improving the prediction accuracy over the noise can improve the certified region (measured by ALM), which is also the goal of training the smoothed classifier. The loss function can be formally defined as: $$\mathcal{L}(\theta_f, \theta_g) = -\frac{1}{d} \sum_i (\sigma_i(\theta_g)) - \sum_{k=1}^{N} y_k \log \hat{y}_k(x + \epsilon^\top \sigma(\theta_g) + \mu(\theta_g), \theta_f, \theta_g)$$ where $\theta_f$ and $\theta_g$ denote the model parameters of the classifier and parameter generator, respectively, $k$ denotes the prediction class, $N$ represents the total number of classes, $y_k$ denotes the label of input $x$, and $\hat{y}_k$ is the prediction of $y_k$. The training of the NPG for $\mu$ is also guided by the smoothing loss to improve the prediction over the dataset. **Universality.** The universality of noise and that of our robustness are different: the former focuses on using one noise to universally protect an entire dataset, and the latter focuses on defending against a wide range of perturbations (e.g., different $\ell_p$-norms) with various anisotropic noise distribution. ### 4.3 INPUT-DEPENDENT ANISOTROPIC NOISE Although the universal anisotropic noise optimizes the noise parameters during training to provide better robustness guarantees, it is still not sufficiently fine-tuned to fully capture the heterogeneity between different input samples. Since the certification is also an input-dependent process that provides specific guarantees for different inputs (the guarantee is only valid on the corresponding certified input), it is intuitive to generate the best anisotropic noise for each data sample. Therefore, we design an input-dependent NPG to produce the anisotropic noise, which considers additional heterogeneity between different inputs besides the heterogeneity between the data dimensions. Different from generating the universal anisotropic noise, the parameter generators for mean and variance are not in parallel, but cascaded (see Figure 2(b)). Specifically, in the training and certification, the mean parameter generator takes the input $x$ and returns a $\mu$ map. Then the variance parameter generator takes $x + \mu$ as the input to returns a $\sigma$ map (scale parameter of the noise). After that, the noisy input will be injected to the base classifier to train the smoothed classifier. **Architecture.** This NPG learns the mapping from the image to the $\mu$ and $\sigma$ maps, which is similar to the function of neural networks in image transformation. Hence, inspired by the image super-resolution (Zhang et al., 2018), we also adapt the “dense blocks” (Huang et al., 2017) as the main architecture to design the NPG. The details for the architecture can be found in Appendix E. **Loss Functions.** The loss function is similar to that used for the universal anisotropic noise, but the NPG takes $x$ as input and outputs $\mu$ and $\sigma$. ## 5 EXPERIMENTS We comprehensively evaluate UCAN in this section. Specifically, in Section 5.1 we test UCAN with the three different ways of generating anisotropic noises and compare them with the randomized smoothing baseline with isotropic noises. In Section 5.2 we thoroughly evaluate the universality of UCAN, including the universality on noise distributions and against different $\ell_p$ perturbations. In Section 5.3 we benchmark the best performance of UCAN with the SOTA RS methods. 
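Before turning to the metrics, a compact sketch of the anisotropic smoothing and certification pipeline evaluated below may help fix ideas. It assumes a zero-mean Gaussian base noise with the simplified Gaussian $\ell_2$ radius $R = \lambda\,\Phi^{-1}(p_A)$ of Cohen et al. (2019), uses a plain Monte Carlo estimate of $p_A$ where a proper lower confidence bound (e.g., Clopper-Pearson) would be used in practice, and the function and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def certify_anisotropic(f, x, mu, sigma, lam=1.0, n=1000):
    """Sketch of anisotropic smoothing (Theorem 3.2) with Gaussian base noise.

    f: base classifier mapping a flat input to an integer class label.
    mu, sigma: per-dimension NPG outputs; lam: base noise scale; n: Monte Carlo
    samples (the experiments use n = 100,000 and a confidence lower bound on p_A).
    """
    eps = np.random.randn(n, x.size) * lam                    # isotropic base noise
    votes = np.bincount([f(x + e * sigma + mu) for e in eps]) # anisotropic noise eps*sigma + mu
    c_hat = int(votes.argmax())
    p_a = votes[c_hat] / n                                     # plain MC estimate of p_A
    if p_a <= 0.5:
        return None                                            # abstain
    R = lam * norm.ppf(p_a)                                    # isotropic-theory l2 radius
    # Theorem 3.2: prediction unchanged for all delta' with (sum_i |delta'_i/sigma_i|^2)^(1/2) <= R
    R_conservative = sigma.min() * R                           # Corollary 3.3: R' = min_i sigma_i * R
    alm = np.exp(np.log(sigma).mean()) * R                     # ALM = (prod_i sigma_i)^(1/d) * R
    return c_hat, R, R_conservative, alm
```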
**Metrics.** We derive the certified accuracy per the Alternative Lebesgue Measure (ALM) (Eiras et al., 2022), which can be defined as the fraction of the test set that is certified to be consistently correct within the ALM (certified region). Formally, certified accuracy w.r.t. certified radius and ALM can be defined as: \[ \text{Acc}(R) = \frac{1}{N} \sum_{j=1}^{N} 1[g'(x_j + \delta) = y_j], \forall ||\delta||_p \leq R \] and \[ \text{Acc}(\text{ALM}) = \frac{1}{N} \sum_{j=1}^{N} 1[g'(x_j + \delta) = y_j], \forall ||\delta||_p \leq \sqrt[d]{\prod_i \sigma_i} R, \] respectively, where \( x_j \) and \( y_j \) denote the \( j \)-th sample and its label in the test set, respectively. \( N \) denotes the number of inputs/images in the test set. To fairly position our methods, when compared to the SOTA methods (mostly isotropic), we present the certified accuracy w.r.t. both the \( \ell_p \) radius and the ALM. **Experimental Settings.** We test UCAN on three image datasets: MNIST (LeCun et al., 2010), CIFAR10 (Krizhevsky et al., 2009), and ImageNet (Russakovsky et al., 2015). Following Cohen et al. (2019) on certification, we obtain the certified accuracy on the entire test set in CIFAR10 and MNIST while randomly picking 500 samples in the test set of ImageNet; we set \( \alpha = 0.001 \) and the numbers of Monte Carlo samples \( n_0 = 100 \) and \( n = 100,000 \). More details about the experimental settings can be found in Appendix I. ### 5.1 Comparison of Randomized Smoothing with Anisotropic and Isotropic Noise We first evaluate randomized smoothing with anisotropic noise generated by the three example NPGs. W.l.o.g., we adopt the Gaussian distribution (with zero-mean) to generate the anisotropic noise for randomized smoothing against \( \ell_2 \) perturbation and compare with the isotropic Gaussian baseline (Cohen et al., 2019), which derives the tight certified radius (under multi-class setting) against \( \ell_2 \) perturbations. Other distributions against different \( \ell_p \) perturbations (universality of UCAN) are detailed in Section 5.2. **Parameter Setting.** We follow Cohen et al. (2019) to set different variances for isotropic Gaussian noise. For our pattern-fixed anisotropic noise, since the variance varies in different dimensions, we re-scale \( \sigma(a, b) \) such that \( \sqrt[d]{\prod_i \sigma_i} = 1.0 \). For the universal method, we set \( \gamma = 5 \) for MNIST and CIFAR10 and \( \gamma = 2 \) for ImageNet to achieve the best trade-off. After training, \( \sqrt[d]{\prod_i \sigma_i} \) for MNIST, CIFAR10, and ImageNet are 1.56, 0.93, and 0.92, respectively. For the input-dependent method, the amplification factor \( \gamma \) in the parameter generator is set as 1.0 for all the datasets. **Experimental Results.** The experimental results are presented in Figure 4(a)–4(c). They show that the certified accuracy of pattern-fixed anisotropic noise is strictly above the baseline with variance \( \lambda = 1.0 \) on all the datasets. Note that when \( \sqrt[d]{\prod_i \sigma_i} = 1.0 \), the Alternative Lebesgue Measure is equal to \( R \). Therefore, it suggests that with the same level of variance, the anisotropic noise can achieve higher prediction accuracy since the key parts of the images (showing the objects) were less perturbed (see visualized examples in Figure 7(b)).
Figure 4(d)–4(f) shows that the universal anisotropic noise significantly boosts the certified accuracy on CIFAR10 and achieves the best trade-off between certified accuracy and Alternative Lebesgue Measure (certified region). The certified accuracy is improved up to 39%. Figure 4(g)–4(i) shows that the input-dependent anisotropic noise significantly boosts the certified accuracy. The best improvement of the certified accuracy is 54%, and 35% for CIFAR10, and ImageNet, respectively, since the noise parameter generator (NPG) learns the best spatial distribution of noise parameters (see the examples in Figure 7(d)). ### 5.2 Universality (Different Noise PDFs against Different \( \ell_p \) Perturbations) In this section, we evaluate the universality of UCAN over different noise distributions against different \( \ell_p \) perturbations. Specifically, we evaluate the universality over noise distributions listed in Table II. For a fair comparison, we follow Yang et al. (2020) to set the scalar parameter \( \lambda \) of different noise distributions such that the variance is equal to 1. For the anisotropic noise, we follow the settings in Section 5.1 to set the parameters for pattern-fixed, universal, and input-dependent anisotropic noise. We only present the \( \ell_2 \) pattern of the pattern-fixed anisotropic noise due to the similar performance of different patterns. We only present the certified defenses against \( \ell_1 \) and \( \ell_\infty \) perturbations due to lack of existing works against \( \ell_2 \) perturbations in Table I (the comparison with Cohen et al. (2019) against \( \ell_2 \) perturbations has been given earlier). The experimental results are shown in Figure 5. In all settings, UCAN can universally amplify the certified robustness of randomized smoothing with isotropic noise. Figure 4: Comparison of randomized smoothing with anisotropic noise and that with isotropic noise (Gaussian distribution for certified defense against $\ell_2$ perturbations, comparing with Cohen et al. (2019)) – UCAN gives significantly better certified accuracy and larger certified region. Table 2: Certified accuracy vs. $\ell_2$ perturbations (MNIST). | Radius and ALM (equivalent for isotropic) | 0.0 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | 1.75 | 2.00 | 2.25 | |------------------------------------------|-----|------|------|------|------|------|------|------|------|------| | Cohen et al. (2019) | 83% | 61% | 43% | 32% | 22% | 17% | 14% | 9% | 7% | 4% | | Sampled Noise (Wang et al., 2020) | 98% | 97% | 96% | 95% | 88% | 81% | 73% | 57% | 41% | 25% | | Input-dependent (Sun et al., 2021) | 99% | 98% | 97% | 94% | 88% | 79% | 58% | 27% | 0% | 0% | | MACER (Zhai et al., 2020) | 99% | 99% | 96% | 95% | 90% | 83% | 73% | 50% | 36% | 28% | | SmoothMix (Jeong et al., 2021) | 99% | 99% | 98% | 97% | 93% | 89% | 82% | 71% | 45% | 37% | | DRT (Yang et al., 2021) | 99% | 98% | 97% | 93% | 89% | 83% | 70% | 48% | 40% | | | Ours (certified accuracy w.r.t. radius) | 99% | 99% | 99% | 99% | 99% | 99% | 98% | 98% | 98% | 97% | | Ours (certified accuracy w.r.t. ALM) | 98% | 98% | 98% | 98% | 97% | 97% | 96% | 96% | 95% | 94% | | Improvement over Baseline (%) | +10%| +10% | +10% | +21% | +65% | +112%| +151%| +130%| +101%| +152%| 5.3 BEST PERFORMANCE COMPARISON VS. 
SOTA METHODS We also compare our best performance (certification with input-dependent anisotropic noise) with the best performance of 12 SOTA methods.\footnote{W.l.o.g., we compare them on $\ell_2$ perturbations, and can draw similar observations on other $\ell_p$ perturbations.} Here we present the certified accuracy w.r.t. both ALM and $\ell_p$ radius. Note that the ALM and radius are equivalent for SOTA methods with isotropic noise. Following the same settings in such existing randomized smoothing methods Cohen et al. (2019); Alfarra et al. (2020); Sukienik et al. (2021); Wang et al. (2020), we focus on the Gaussian noise against $\ell_2$ perturbations to benchmark with them. Results are shown in Table 2[3] and [4]. We mark the best performance of our methods and the baselines as red and blue, respectively. We also present the improvement of our method over the best baseline in percentage. Figure 5: UCAN with three types of anisotropic noise (AN) vs. randomized smoothing with isotropic noise – different noise PDFs against different $\ell_p$ perturbations (universality) on CIFAR10. Table 3: Certified accuracy vs. $\ell_2$ perturbations (CIFAR10). | Radius and ALM equivalent for isotropic | 0.0 | 0.25 | 0.50 | 0.75 | 1.00 | 1.25 | 1.50 | 1.75 | 2.00 | 2.25 | |----------------------------------------|-----|------|------|------|------|------|------|------|------|------| | Cohen (Cohen et al., 2019) | 83% | 61% | 43% | 32% | 22% | 17% | 14% | 9% | 7% | 4% | | SmoothAdv (Salman et al., 2019) | – | 81% | 63% | 52% | 37% | 33% | 29% | 25% | 18% | 16% | | MACER (Zhai et al., 2020) | 81% | 71% | 59% | 47% | 39% | 33% | 29% | 23% | 19% | 17% | | Consistency (Jeong & Shin, 2020) | 78% | 69% | 58% | 49% | 38% | 34% | 30% | 25% | 20% | 17% | | SmoothMix (Jeong et al., 2021) | 77% | 68% | 56% | 46% | 37% | 32% | 28% | 20% | 17% | 15% | | Boosting (Horvath et al., 2021) | 83% | 71% | 60% | 52% | 39% | 34% | 30% | 25% | 20% | 17% | | DRT (Yang et al., 2021) | 73% | 67% | 60% | 51% | 40% | 36% | 30% | 24% | 20% | | | Black-box (Zhang et al., 2020) | – | 61% | 46% | 37% | 25% | 19% | 16% | 14% | 11% | 9% | | Data-dependent (Alfia et al., 2020) | 82% | 68% | 53% | 44% | 32% | 21% | 14% | 8% | 4% | 1% | | Samplewise (Wang et al., 2020) | 84% | 74% | 61% | 52% | 45% | 41% | 36% | 32% | 27% | 23% | | Input-dependent (Yang et al., 2021) | 83% | 66% | 57% | 47% | 38% | 32% | 29% | 25% | 19% | 15% | | Denoise I (Carlini et al., 2021) | 80% | 70% | 55% | 48% | 37% | 32% | 29% | 25% | 15% | 14% | | Denoise II (Zhang et al., 2022) | 85% | 76% | 66% | 57% | 44% | 37% | 31% | 25% | 22% | 20% | | ANCERT (Elias et al., 2022) | 86% | 85% | 77% | – | 53% | – | 31% | – | 17% | – | | Ours (certified accuracy w.r.t. radius) | 85% | 83% | 81% | 80% | 77% | 75% | 73% | 70% | 68% | 65% | | Ours (certified accuracy w.r.t. ALM) | 86% | 84% | 82% | 80% | 74% | 71% | 68% | 65% | 61% | 57% | Table 4: Certified accuracy vs. $\ell_2$ perturbations (ImageNet). 
| Radius and ALM (equivalent for isotropic) | 0.00 | 0.50 | 1.00 | 1.50 | 2.00 | 2.50 | 3.00 | 3.50 | |------------------------------------------|------|------|------|------|------|------|------|------| | Cohen (Cohen et al., 2019) | 67% | 49% | 37% | 28% | 19% | 15% | 12% | 9% | | SmoothAdv (Salman et al., 2019) | 67% | 56% | 45% | 38% | 28% | 26% | 20% | 17% | | MACER (Zhai et al., 2020) | 68% | 57% | 43% | 37% | 27% | 25% | 20% | – | | Consistency (Jeong & Shin, 2020) | 57% | 50% | 44% | 34% | 24% | 21% | 17% | – | | SmoothMix (Jeong et al., 2021) | 55% | 50% | 43% | 38% | 26% | 24% | 20% | – | | Boosting (Horvath et al., 2021) | 68% | 57% | 45% | 38% | 29% | 25% | 21% | 19% | | DRT (Yang et al., 2021) | 50% | 47% | 44% | 39% | 30% | 29% | 23% | – | | Black-box (Zhang et al., 2020) | – | 50% | 39% | 31% | 21% | 17% | 13% | 10% | | Data-dependent (Alfia et al., 2020) | 62% | 59% | 48% | 43% | 31% | 25% | 22% | 19% | | Denoise I (Carlini et al., 2021) | 48% | 41% | 30% | 24% | 19% | 16% | 13% | – | | Denoise II (Zhang et al., 2022) | 66% | 59% | 48% | 40% | 31% | 25% | 22% | – | | ANCERT (Elias et al., 2022) | 70% | 70% | 62% | 61% | 42% | 36% | 29% | – | | Ours (certified accuracy w.r.t. radius) | 65% | 62% | 58% | 53% | 50% | 46% | 43% | 38% | | Ours (certified accuracy w.r.t. ALM) | 71% | 66% | 62% | 58% | 54% | 51% | 47% | 42% | On all the three datasets, UCAN significantly boosts the certified accuracy. For instance, it achieves the improvement of 142.5%, 182.6%, and 121.1% over the best baseline on MNIST, CIFAR10, and ImageNet, respectively. UCAN also achieves the best trade-off between certified accuracy and ALM/radius (two important metrics): 1) UCAN presents both larger radius/ALM and higher certified accuracy in general, and 2) On large ALM/radius, UCAN can still achieve high certified accuracy. Finally, some examples of anisotropic vs. isotropic noise and the results for efficiency are given in Appendix G and H. 6 CONCLUSION In this paper, we propose a novel randomized smoothing framework called UCAN. UCAN can transform any randomized smoothing scheme with isotropic noise into randomized smoothing with anisotropic noise with robustness guarantees. Extensive experimental results validate that UCAN significantly boosts the certified robustness of existing randomized smoothing methods. REFERENCES Shahnawaz Ahmed and Elias G Saleeby. On volumes of hyper-ellipsoids. Mathematics Magazine, 91(1):43–50, 2018. Motasem Alfarra, Adel Bibi, Philip HS Torr, and Bernard Ghanem. Data dependent randomized smoothing. arXiv preprint arXiv:2012.04351, 2020. Robert G Bartle. The elements of integration and Lebesgue measure. John Wiley & Sons, 2014. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (sp), pp. 39–57. IEEE, 2017. Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, and J Zico Kolter. (certified!!) adversarial robustness for free! ICLR, 2023. Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310–1320. PMLR, 2019. Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A Bharath. Generative adversarial networks: An overview. IEEE signal processing magazine, 35(1):53–65, 2018. Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages, and Programming, pp. 1–12. Springer, 2006. 
Francisco Eiras, Motasem Alfarra, Philip Torr, M. Pawan Kumar, Puneet K. Dokania, Bernard Ghanem, and Adel Bibi. ANCER: Anisotropic certification via sample-wise volume maximization. Transactions of Machine Learning Research, 2022. URL https://openreview.net/forum?id=7jOGI6tPYi. Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA), pp. 80–89. IEEE, 2018. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Hanbin Hong, Binghui Wang, and Yuan Hong. Unicr: Universally approximated certified robustness via randomized smoothing. In European Conference on Computer Vision, pp. 86–103. Springer, 2022. Miklós Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers. arXiv preprint arXiv:2106.06946, 2021.
aOnUe8ah7j
A primary concern from the reviewer centers on the paper's predominant reliance on existing methodologies to address the issue. Specifically, in Sec 3.1 (From Symbol to Point), many parameterizations echo those found in FloorplanCAD, although this paper seeks to enhance the diversity of encoded features. The point-based representation in Sec 3.2 directly employs the Point Transformer, reminiscent of CADTransformer. Both Contrastive Connection Learning (Sec 3.4) and KNN Interpolation (Sec 3.5) have been thoroughly examined in other scholarly works. While the
Symbol as Points: Panoptic Symbol Spotting via Point-based Representation Wenlong Liu\textsuperscript{1}, Tianyu Yang\textsuperscript{1}, Yuhan Wang\textsuperscript{2}, Qizhi Yu\textsuperscript{2}, Lei Zhang\textsuperscript{1} \textsuperscript{1}International Digital Economy Academy (IDEA) \hspace{1cm} \textsuperscript{2}Vanyi Tech Abstract This work studies the problem of panoptic symbol spotting, which is to spot and parse both countable object instances (windows, doors, tables, etc.) and uncountable stuff (wall, railing, etc.) from computer-aided design (CAD) drawings. Existing methods typically involve either rasterizing the vector graphics into images and using image-based methods for symbol spotting, or directly building graphs and using graph neural networks for symbol recognition. In this paper, we take a different approach, which treats graphic primitives as a set of 2D points that are locally connected and use point cloud segmentation methods to tackle it. Specifically, we utilize a point transformer to extract the primitive features and append a mask2former-like spotting head to predict the final output. To better use the local connection information of primitives and enhance their discriminability, we further propose the attention with connection module (ACM) and contrastive connection learning scheme (CCL). Finally, we propose a KNN interpolation mechanism for the mask attention module of the spotting head to better handle primitive mask downsampling, which is primitive-level in contrast to pixel-level for the image. Our approach, named SymPoint, is simple yet effective, outperforming recent state-of-the-art method GAT-CADNet by an absolute increase of 9.6% PQ and 10.4% RQ on the FloorPlanCAD dataset. The source code and models will be available at https://github.com/nicehuster/SymPoint. 1 Introduction Vector graphics (VG), renowned for their ability to be scaled arbitrarily without succumbing to issues like blurring or aliasing of details, have become a staple in industrial designs. This includes their prevalent use in graphic designs (Reddy et al., 2021), 2D interfaces (Carlier et al., 2020), and Computer-aided design (CAD) (Fan et al., 2021). Specifically, CAD drawings, consisting of geometric primitives (e.g., arc, circle, polyline, etc.), have established themselves as the preferred data representation method in the realms of interior design, indoor construction, and property development, promoting a higher standard of precision and innovation in these fields. Symbol spotting (Rezvanifar et al., 2019; 2020; Fan et al., 2021; 2022; Zheng et al., 2022) refers to spotting and recognizing symbols from CAD drawings, which serves as a foundational task for reviewing the error of design drawing and 3D building information modeling (BIM). Spotting each symbol, a grouping of graphical primitives, within a CAD drawing poses a significant challenge due to the existence of obstacles such as occlusion, clustering, variations in appearances, and a significant imbalance in the distribution of different categories. Traditional symbol spotting usually deals with instance symbols representing countable things (Rezvanifar et al., 2019), like table, sofa, and bed. Fan et al. (2021) further extend it to panoptic symbol spotting which performs both the spotting of countable instances (e.g., a single door, a window, a table, etc.) and the recognition of uncountable stuff (e.g., wall, railing, etc.). 
Typical approaches (Fan et al., 2021; 2022) addressing the panoptic symbol spotting task involve first converting CAD drawings to raster graphics (RG) and then processing it with powerful image-based detection or segmentation methods (Ren et al., 2015; Sun et al., 2019). Another line of previous works (Jiang et al., 2021; Zheng et al., 2022; Yang et al., 2023) abandons the raster procedure and directly processes vector graphics for recognition with graph convolutions networks. Instead of rastering CAD drawings to images or modeling the graphical primitives with GCN/GAT, which can be computationally expensive, especially for large CAD graphs, we propose a new paradigm that has the potential to shed novel insight rather than merely delivering incremental advancements in performance. Upon analyzing the data characteristics of CAD drawings, we can find that CAD drawing has three main properties: 1). irregularity and disorderliness. Unlike regular pixel arrays in raster graphics/images, CAD drawing consists of geometric primitives(e.g., arc, circle, polyline, etc.) without specific order. 2). local interaction among graphical primitives. Each graphical primitive is not isolated but locally connected with neighboring primitives, forming a symbol. 3). invariance under transformations. Each symbol is invariant to certain transformations. For example, rotating and translating symbols do not modify the symbol’s category. These properties are almost identical to point clouds. Hence, we treat CAD drawing as sets of points (graphical primitives) and utilize methodologies from point cloud analysis (Qi et al., 2017a;b; Zhao et al., 2021) for symbol spotting. In this work, we first consider each graphic primitive as an 8-dimensional data point with the information of position and primitive’s properties (type, length, etc.). We then utilize methodologies from point cloud analysis for graphic primitive representation learning. Different from point clouds, these graphical primitives are locally connected. We therefore propose contrastive connectivity learning mechanism to utilize those local connections. Finally, we borrow the idea of Mask2Former(Cheng et al., 2021; 2022) and construct a masked-attention transformer decoder to perform the panoptic symbol spotting task. Besides, rather than using bilinear interpolation for mask attention downsampling as in (Cheng et al., 2022), which could cause information loss due to the sparsity of graphical primitives, we propose KNN interpolation, which fuses the nearest neighboring primitives, for mask attention downsampling. We conduct extensive experiments on the FloorPlanCAD dataset and our SymPoint achieves 83.3% PQ and 91.1% RQ under the panoptic symbol spotting setting, which outperforms the recent state-of-the-art method GAT-CADNet (Zheng et al., 2022) with a large margin. 2 Related Work Vector Graphics Recognition Vector graphics are widely used in 2D CAD designs, urban designs, graphic designs, and circuit designs, to facilitate resolution-free precision geometric modeling. Considering their wide applications and great importance, many works are devoted to recognition tasks on vector graphics. Jiang et al. (2021) explores vectorized object detection and achieves a superior accuracy to detection methods (Bochkovskiy et al., 2020; Lin et al., 2017) working on raster graphics while enjoying faster inference time and less training parameters. Shi et al. 
(2022) propose a unified vector graphics recognition framework that leverages the merits of both vector graphics and raster graphics.

**Panoptic Symbol Spotting** Traditional symbol spotting usually deals with instance symbols representing countable things (Rezvanifar et al., 2019), like table, sofa, and bed. Following the idea in (Kirillov et al., 2019), Fan et al. (2021) extended the definition by recognizing the semantics of uncountable stuff, and named it panoptic symbol spotting. Therefore, all components in a CAD drawing are covered in one task altogether. For example, the wall represented by a group of parallel lines was properly handled by (Fan et al., 2021), which however was treated as background by (Jiang et al., 2021; Shi et al., 2022; Nguyen et al., 2009) in vector graphics recognition. Meanwhile, the first large-scale real-world FloorPlanCAD dataset in the form of vector graphics was published by (Fan et al., 2021). Fan et al. (2022) propose CADTransformer, which modifies existing vision transformer (ViT) backbones for the panoptic symbol spotting task. Zheng et al. (2022) propose GAT-CADNet, which formulates the instance symbol spotting task as a subgraph detection problem and solves it by predicting the adjacency matrix.

**Point Cloud Segmentation** Point cloud segmentation aims to map the points into multiple homogeneous groups. Unlike 2D images, which are characterized by regularly arranged dense pixels, point clouds are constituted of unordered and irregular point sets. This makes the direct application of image processing methods to point cloud segmentation an impracticable approach. However, in recent years, the integration of neural networks has significantly enhanced the effectiveness of point cloud segmentation across a range of applications, including semantic segmentation (Qi et al., 2017a; Zhao et al., 2021), instance segmentation (Ngo et al., 2023; Schult et al., 2023) and panoptic segmentation (Zhou et al., 2021; Li et al., 2022; Hong et al., 2021; Xiao et al., 2023), etc.

3 Method

Our method forgoes raster images and GCNs in favor of a point-based representation of graphical primitives. Compared to image-based representations, it reduces the complexity of models due to the sparsity of primitives in CAD drawings. In this section, we first describe how to form the point-based representation using the graphical primitives of CAD drawings. Then we illustrate a baseline framework for panoptic symbol spotting. Finally, we thoroughly explain three key techniques, attention with local connection, contrastive connection learning, and KNN interpolation, to adapt this baseline framework to better handle CAD data.

3.1 From Symbol to Points

Given vector graphics represented by a set of graphical primitives \( \{p_k\} \), we treat them as a collection of points \( \{p_k | (x_k, f_k)\} \), where each point contains both primitive position \( \{x_k\} \) and primitive feature \( \{f_k\} \) information; hence, the point set can be unordered and disorganized.

**Primitive position.** Given a graphical primitive, the coordinates of the starting point and the ending point are \((x_1, y_1)\) and \((x_2, y_2)\), respectively. The primitive position \( x_k \in \mathbb{R}^2 \) is defined as:

\[ x_k = \left( \frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2} \right). \]

We take its center as the primitive position for a closed graphical primitive (circle, ellipse), as shown in Fig. 1a.
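As a small illustration of the mapping just described, the sketch below converts a graphical primitive into its point position; the dictionary-based primitive representation is a hypothetical stand-in for whatever parser output is used, and the feature vector $f_k$ is defined in the next paragraph.

```python
def primitive_position(prim):
    """Point position x_k of a graphical primitive.

    `prim` is assumed to be a dict with a 'type' field plus either the two
    endpoints (for open primitives such as lines and arcs) or a 'center'
    (for closed primitives such as circles and ellipses).
    """
    if prim["type"] in ("circle", "ellipse"):          # closed primitive: use its center
        return tuple(prim["center"])
    (x1, y1), (x2, y2) = prim["start"], prim["end"]    # open primitive: midpoint of endpoints
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```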
**Primitive feature.** We define the primitive features \( f_k \in \mathbb{R}^6 \) as:

\[ f_k = [\alpha_k, l_k, \text{onehot}(t_k)], \]

where \( \alpha_k \) is the clockwise angle from the positive \( x \)-axis to \( x_k \), and \( l_k \) represents the distance between \( v_1 \) and \( v_2 \) for linear primitives, as shown in Fig. 1b. For circular primitives like circles and ellipses, \( l_k \) is defined as the circumference. We encode the primitive type \( t_k \) (line, arc, circle, or ellipse) into a one-hot vector to make up for the information missing from segment approximations.

3.2 Panoptic Symbol Spotting via Point-based Representation

The baseline framework primarily comprises two components: the backbone and the symbol spotting head. The backbone converts raw points into point features, while the symbol spotting head predicts the symbol mask through learnable queries (Cheng et al., 2021; 2022). Fig. 2 illustrates the whole framework.

Figure 2: The overview of our method. After transferring CAD drawings to primitive points, we use a backbone to extract multi-resolution features $F_r$ and append a symbol spotting head to spot and recognize symbols. During this process, we propose the attention with connection module (ACM), which utilizes primitive connection information when performing self-attention in the first stage of the backbone. Subsequently, we propose contrastive connection learning (CCL) to enhance the discriminability between connected primitive features. Finally, we propose KNN interpolation for attention mask downsampling (AMD) to effectively downsample the high-resolution attention masks.

**Backbone.** We choose Point Transformer (Zhao et al., 2021) with a symmetrical encoder and decoder as our backbone for feature extraction due to its good generalization capability in panoptic symbol spotting. The backbone takes primitive points as input, and performs vector attention between each point and its adjacent points to explore local relationships. Given a point $p_i$ and its adjacent points $\mathcal{M}(p_i)$, we project them into query feature $q_i$, key feature $k_j$, and value feature $v_j$, and obtain the vector attention as follows:

$$w_{ij} = \omega(\gamma(q_i, k_j)), \quad f_i^{\text{attn}} = \sum_{p_j \in \mathcal{M}(p_i)} \text{Softmax}(W_i)_j \odot v_j,$$ (3)

where $\gamma$ serves as a relational function, such as subtraction, $\omega$ is a learnable weight encoding that calculates the attention vectors, and $\odot$ is the Hadamard product.

**Symbol Spotting Head.** We follow Mask2Former (Cheng et al., 2022) to use hierarchical multi-resolution primitive features $F_r \in \mathbb{R}^{N_r \times D}$ from the decoder of the backbone as the input to the symbol spotting prediction head, where $N_r$ is the number of feature tokens in resolution $r$ and $D$ is the feature dimension. This head consists of $L$ layers of masked attention modules which progressively upscale low-resolution features from the backbone to produce high-resolution per-pixel embeddings for mask prediction. There are two key components in the masked attention module: *query updating* and *mask predicting*. For each layer $l$, *query updating* involves interacting with different resolution primitive features $F_r$ to update the query features. This process can be formulated as

$$X_l = \text{softmax}(A_{l-1} + Q_lK_l^T)V_l + X_{l-1},$$ (4)

where $X_l \in \mathbb{R}^{O \times D}$ denotes the query features and $O$ is the number of query features.
$Q_l = f_Q(X_{l-1})$, $K_l = f_K(F_r)$, and $V_l = f_V(F_r)$ are the query, key, and value features projected by MLP layers. $A_{l-1}$ is the attention mask, which is computed by

$$A_{l-1}(v) = \begin{cases} 0 & \text{if } M_{l-1}(v) > 0.5, \\ -\infty & \text{otherwise}, \end{cases}$$ (5)

where $v$ is the position of a feature point and $M_{l-1}$ is the mask predicted by the *mask predicting* part. Note that we need to downsample the high-resolution attention mask to perform query updating on low-resolution features. In practice, we utilize four coarse-level primitive features from the decoder of the backbone and perform *query updating* from coarse to fine.

During the *mask predicting* process, we obtain the object mask $M_l \in \mathbb{R}^{O \times N_0}$ and its corresponding category $Y_l \in \mathbb{R}^{O \times C}$ by projecting the query features using two MLP layers $f_Y$ and $f_M$, where $C$ is the category number and $N_0$ is the number of points. The process is as follows:

$$Y_l = f_Y(X_l), \quad M_l = f_M(X_l)F_0^T.$$

The outputs of the final layer, $Y_L$ and $M_L$, are the predicted results.

### 3.3 Attention with Connection Module

This simple and unified framework is rewarded with excellent generalization ability by offering a fresh perspective on CAD drawings as sets of points. It can obtain competitive results compared to previous methods. However, it ignores the widespread presence of primitive connections in CAD drawings. It is precisely because of these connections that scattered, unrelated graphical elements come together to form symbols with special semantics. To utilize the connections between primitives, we propose the Attention with Connection Module (ACM); the details are given below.

Two graphical primitives $(p_i, p_j)$ are considered interconnected if the minimum distance $d_{ij}$ between their endpoints $(v_i, v_j)$ is below a certain threshold $\epsilon$, where

$$d_{ij} = \min_{v_i \in p_i, v_j \in p_j} ||v_i - v_j|| < \epsilon.$$

To keep the complexity low, at most $K$ connections are allowed for every graphical primitive by random dropping. Fig. 3a demonstrates the connection construction around the wall symbol, where the gray lines are the connections between primitives. In practice, we set $\epsilon$ to 1.0px.

The attention mechanism in (Zhao et al., 2021) directly performs local attention between each point and its adjacent points to explore the relationship. The original attention mechanism interacts only with neighboring points within a spherical region, as shown in Fig. 3b. Our ACM additionally introduces the interaction with locally connected primitive points during attention (pink points), essentially enlarging the radius of the spherical region. Note that we experimentally found that crudely increasing the radius of the spherical region without considering the local connections of primitive points does not result in performance improvement. This may be explained by the fact that enlarging the receptive field also introduces additional noise. Specifically, we extend the adjacent point set $M(p_i)$ in Eq.
(3) to $A(p_i) = M(p_i) \cup C(p_i)$, where $C(p_i) = \{p_j | d_{ij} < \epsilon\}$, yielding

$$f_i^{\text{attn}} = \sum_{p_j \in A(p_i)} \text{Softmax}(W_i)_j \odot v_j.$$

In practice, since we cannot directly obtain the connection relationships of the points in the intermediate layers of the backbone, we integrate this module into the first stage of the backbone to replace the original local attention, as shown in Fig. 2.

### 3.4 Contrastive Connection Learning

Although the information of primitive connections is considered when calculating attention in the encoder transformer, locally connected primitives may not belong to the same instance; in other words, noisy connections could be introduced when taking primitive connections into consideration, as shown in Fig. 3c. Therefore, in order to more effectively utilize connection information with category consistency, we follow the widely used InfoNCE loss (Oord et al., 2018) and its generalization (Frosst et al., 2019; Gutmann & Hyvärinen, 2010) to define the contrastive learning objective on the final output feature of the backbone. We encourage learned representations to be more similar to connected points of the same category and more distinguishable from connected points of different categories. Additionally, we also take neighbor points \( M(p_i) \) into consideration, yielding

\[ L_{CCL} = -\log \frac{\sum_{p_j \in A(p_i) \land l_j = l_i} \exp(-d(f_i, f_j)/\tau)}{\sum_{p_k \in A(p_i)} \exp(-d(f_i, f_k)/\tau)} \] (9)

where \( f_i \) is the backbone feature of \( p_i \), \( d(\cdot, \cdot) \) is a distance measurement, and \( \tau \) is the temperature in contrastive learning; we set \( \tau = 1 \) by default.

### 3.5 KNN Interpolation

During the query updating process in the symbol spotting head (Eq. (4) & Eq. (5)), we need to convert high-resolution mask predictions to low resolution for attention mask computation, as shown in Fig. 2 (AMD on the right). Mask2Former (Cheng et al., 2022) employs bilinear interpolation on the pixel-level mask for downsampling. However, the masks of CAD drawings are primitive-level, making it infeasible to directly apply bilinear interpolation on them. To this end, we propose the KNN interpolation for downsampling the attention masks by fusing the nearest neighboring points. A straightforward operation is max pooling or average pooling. We instead utilize distance-based interpolation. For simplicity, we omit the layer index \( l \) in \( A \):

\[ A^r(p_i) = \frac{\sum_{p_j \in K(p_i)} A^0(p_j)/d(p_i, p_j)}{\sum_{p_j \in K(p_i)} 1/d(p_i, p_j)} \] (10)

where \( A^0 \) and \( A^r \) are the full-resolution attention mask and the \( r \)-resolution attention mask, respectively, \( d(\cdot, \cdot) \) is a distance measurement, and \( K(p_i) \) is the set of \( K \) nearest neighbors. In practice, we set \( K = 4^r \) in our experiments.

### 3.6 Training and Inference

Throughout the training phase, we adopt bipartite matching and a set prediction loss to assign ground truth to predictions with the smallest matching cost. The full loss function \( L \) can be formulated as \( L = \lambda_{BCE} L_{BCE} + \lambda_{dice} L_{dice} + \lambda_{cls} L_{cls} + \lambda_{CCL} L_{CCL} \), where \( L_{BCE} \) is the binary cross-entropy loss (over the foreground and background of that mask), \( L_{dice} \) is the Dice loss (Deng et al., 2018), \( L_{cls} \) is the default multi-class cross-entropy loss to supervise query classification, and \( L_{CCL} \) is the contrastive connection loss.
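To make Eq. (9) and Eq. (10) concrete, below is a minimal PyTorch sketch of the contrastive connection loss and the inverse-distance KNN interpolation. Tensor shapes, the Euclidean distance choice, and the dictionary of connected/neighbor indices are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def ccl_loss(feats, labels, neighbor_idx, tau=1.0):
    """Contrastive connection loss (Eq. 9), averaged over anchor points.

    feats        : (N, D) backbone features of all primitive points
    labels       : (N,) semantic labels (LongTensor)
    neighbor_idx : dict mapping anchor index i -> indices of points in A(p_i)
    """
    losses = []
    for i, idx in neighbor_idx.items():
        idx = torch.as_tensor(idx)
        d = torch.norm(feats[idx] - feats[i], dim=1)      # distance measurement d(f_i, f_j)
        sims = torch.exp(-d / tau)
        pos = sims[labels[idx] == labels[i]].sum()        # same-category connections
        if pos > 0:
            losses.append(-torch.log(pos / sims.sum()))
    return torch.stack(losses).mean() if losses else feats.new_zeros(())

def knn_interpolate_mask(attn_mask, full_xy, coarse_xy, k=4, eps=1e-8):
    """Inverse-distance KNN downsampling of a primitive-level attention mask (Eq. 10).

    attn_mask : (Q, N0) full-resolution mask scores over N0 primitives
    full_xy   : (N0, 2) full-resolution point positions
    coarse_xy : (Nr, 2) positions at the coarser resolution
    """
    dist = torch.cdist(coarse_xy, full_xy)                # (Nr, N0) pairwise distances
    knn_dist, knn_idx = dist.topk(k, dim=1, largest=False)
    w = 1.0 / (knn_dist + eps)                            # inverse-distance weights
    w = w / w.sum(dim=1, keepdim=True)
    gathered = attn_mask[:, knn_idx]                      # (Q, Nr, k)
    return (gathered * w.unsqueeze(0)).sum(dim=-1)        # (Q, Nr)
```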
In our experiments, we empirically set \( \lambda_{BCE} : \lambda_{dice} : \lambda_{cls} : \lambda_{CCL} = 5 : 5 : 2 : 8 \). For inference, we simply use argmax to determine the final panoptic results.

### 4 Experiments

In this section, we present the experimental setting and benchmark results on the public CAD drawing dataset FloorPlanCAD (Fan et al., 2021). Following previous works (Fan et al., 2021; Zheng et al., 2022; Fan et al., 2022), we also compare our method with typical image-based instance detection methods (Ren et al., 2015; Redmon & Farhadi, 2018; Tian et al., 2019; Zhang et al., 2022). Besides, we also compare with point cloud semantic segmentation methods (Zhao et al., 2021). Extensive ablation studies are conducted to validate the effectiveness of the proposed techniques. In addition, we have also validated the generalizability of our method on datasets beyond FloorPlanCAD, with detailed results available in Appendix A.

#### 4.1 Experimental Setting

**Dataset and Metrics.** The FloorPlanCAD dataset has 11,602 CAD drawings of various floor plans with segment-grained panoptic annotation, covering 30 thing classes and 5 stuff classes.

| Methods | PanCADNet (Fan et al., 2021) | CADTransformer (Fan et al., 2022) | GAT-CADNet (Zheng et al., 2022) | PointT$^\dagger$ (Zhao et al., 2021) | SymPoint (ours) |
|---------|-----------------------------|---------------------------------|-------------------------------|----------------------------------|----------------|
| F1 | 80.6 | 82.2 | 85.0 | 83.2 | 86.8 |
| wF1 | 79.8 | 80.1 | 82.3 | 80.7 | 85.5 |

Table 1: Semantic Symbol Spotting comparison results with previous works. $\dagger$: backbone with double channels. wF1: length-weighted F1.

| Method | Backbone | AP50 | AP75 | mAP | #Params | Speed |
|-----------------|----------|------|------|-----|---------|-------|
| FasterRCNN | R101 | 60.2 | 51.0 | 45.2 | 61M | 59ms |
| YOLOv3 | DarkNet53 | 63.9 | 45.2 | 41.3 | 62M | 11ms |
| FCOS | R101 | 62.4 | 49.1 | 45.3 | 51M | 57ms |
| DINO | R50 | 64.0 | 54.9 | 47.5 | 47M | 42ms |
| SymPoint (ours) | PointT$^\dagger$ | 66.3 | 55.7 | 52.8 | 35M | 66ms |

Table 2: Instance Symbol Spotting comparison results with image-based detection methods.

Following (Fan et al., 2021; Zheng et al., 2022; Fan et al., 2022), we use the panoptic quality (PQ) defined on vector graphics as our main metric to evaluate the performance of panoptic symbol spotting. By denoting a graphical primitive $e = (l, z)$ with a semantic label $l$ and an instance index $z$, PQ is defined as the multiplication of segmentation quality (SQ) and recognition quality (RQ), which is formulated as

$$PQ = RQ \times SQ$$
$$= \frac{|TP|}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|} \times \frac{\sum_{(s_p, s_g) \in TP} \text{IoU}(s_p, s_g)}{|TP|}$$
$$= \frac{\sum_{(s_p, s_g) \in TP} \text{IoU}(s_p, s_g)}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|},$$

where $s_p = (l_p, z_p)$ is the predicted symbol and $s_g = (l_g, z_g)$ is the ground truth symbol. $|TP|$, $|FP|$, and $|FN|$ indicate the numbers of true positives, false positives, and false negatives, respectively. A predicted symbol is considered matched if it finds a ground truth symbol with $l_p = l_g$ and $\text{IoU}(s_p, s_g) > 0.5$, where the IoU is computed by:

$$\text{IoU}(s_p, s_g) = \frac{\sum_{e_i \in s_p \cap s_g} \log(1 + L(e_i))}{\sum_{e_j \in s_p \cup s_g} \log(1 + L(e_j))}.$$

**Implementation Details.** We implement SymPoint with PyTorch.
We use PointT (Zhao et al., 2021) with double channels as the backbone and stack $L = 3$ layers for the symbol spotting head. For data augmentation, we adopt rotation, flip, scale, shift, and cutmix augmentation. We choose AdamW (Loshchilov & Hutter, 2017) as the optimizer with a default weight decay of 0.001, the initial learning rate is 0.0001, and we train the model for 1000 epochs with a batch size of 2 per GPU on 8 NVIDIA A100 GPUs. 4.2 Benchmark Results Semantic symbol spotting. We compare our methods with point cloud segmentation methods (Zhao et al., 2021), and symbol spotting methods (Fan et al., 2021; 2022; Zheng et al., 2022). The main test results are summarized in Tab. 1. Our algorithm surpasses all previous methods in the task of semantic symbol spotting. More importantly, compared to GAT-CADNet (Zheng et al., 2022), we achieves an absolute improvement of 1.8% F1. and 3.2% wF1 respectively. For the PointT$^\dagger$, we use our proposed point-based representation in Section 3.1 to convert the CAD drawing to a collection of points as input. It is worth noting that PointT$^\dagger$ has already achieved comparable results to GAT-CADNet (Zheng et al., 2022), which demonstrates the effectiveness of the proposed point-based representation for CAD symbol spotting. Instance Symbol Spotting. We compare our method with various image detection methods, including FasterRCNN (Ren et al., 2015), YOLOv3 (Redmon & Farhadi, 2018), | Method | Data Format | PQ | SQ | RQ | #Params | Speed | |--------------------------------|-------------|------|------|------|---------|-------| | PanCADNet (Fan et al., 2021) | VG + RG | 55.3 | 83.8 | 66.0 | >42M | >1.2s | | CADTransformer (Fan et al., 2022)| VG + RG | 68.9 | 88.3 | 73.3 | >65M | >1.2s | | GAT-CADNet (Zheng et al., 2022) | VG | 73.7 | 91.4 | 80.7 | - | - | | PointT³Cluster (Zhao et al., 2021)| VG | 49.8 | 85.6 | 58.2 | 31M | 80ms | | SymPoint (ours, 300epoch) | VG | 79.6 | 89.4 | 89.0 | 35M | 66ms | | SymPoint (ours, 500epoch) | VG | 81.9 | 90.6 | 90.4 | 35M | 66ms | | SymPoint (ours, 1000epoch) | VG | 83.3 | 91.4 | 91.1 | 35M | 66ms | Table 3: Panoptic Symbol Spotting comparisons results with previous works. VG: vector graphics, RG: raster graphics. (a) Ablation studies of different techniques | Baseline | ACM | CCL | KInter | PQ | RQ | SQ | |----------|-----|-----|--------|------|------|------| | ✓ | ✓ | ✓ | ✓ | 73.1 | 83.3 | 87.7 | | ✓ | ✓ | ✓ | | 72.6 | 82.9 | 87.6 | | ✓ | ✓ | ✓ | | 73.5 | 83.9 | 87.6 | | ✓ | ✓ | ✓ | | 74.3 | 85.8 | 86.6 | | ✓ | ✓ | ✓ | ✓ | 77.3 | 87.1 | 88.7 | (b) Ablation studies of mask downsampling | DSampling method | PQ | RQ | SQ | |------------------|------|------|------| | linear | 74.3 | 85.8 | 86.6 | | knn avepool | 75.9 | 85.9 | 88.4 | | knn maxpool | 77.0 | 86.7 | 88.8 | | knn interp | 77.3 | 87.1 | 88.7 | (c) Ablation studies on architecture design. BS: Backbone size. SW: share weights. L: layer number of spotting head. O: query number. D: feature dimension. ✓ in the share weights column means whether share weights for head layers. Table 4: Ablation Studies on different techniques, attention mask downsampling, and architecture design. FCOS (Tian et al., 2019), and recent DINO (Zhang et al., 2022). For a fair comparison, we post-process the predicted mask to produce a bounding box for metric computation. The main comparison results are listed in Tab. 2. Although our framework is not trained to output a bounding box, it still achieves the best results. Panoptic Symbol Spotting. 
To verify the effectiveness of the symbol spotting head, we also design a variant method without this head, named PointT³Cluster, which predicts an offset vector per graphic entity to gather the instance entities around a common instance centroid and performs class-wise clustering (e.g. meanshift (Cheng, 1995)) to get instance labels as in CADTransformer (Fan et al., 2022). The final results are listed in Tab. 3. Our SymPoint trained with 300epoch outperforms both PointT³Cluster and the recent SOTA method GAT-CADNet (Zheng et al., 2022) substantially, demonstrate the effectiveness of the proposed method. Our method also benefits from longer training and achieves further performance improvement. What’s more, our method runs much faster during the inference phase than previous methods. For image-based methods, it takes approximately 1.2s to render a vector graphic into an image while our method does not need this process. The qualitative results are shown in Fig. 4. 4.3 Ablation Studies In this section, we carry out a series of comprehensive ablation studies to clearly illustrate the potency and intricate details of the SymPoint framework. All ablations are conducted under 300 epoch training. Figure 4: Qualitative comparison of panoptic symbol spotting results with CADTransformer. Primitives belonging to different classes are represented in distinct colors. The colormap for each category can be referenced in Fig. 8. Effects of Techniques. We conduct various controlled experiments to verify different techniques that improve the performance of SymPoint in Tab. 4a. Here the baseline means the method described in Sec. 3.2. When we only introduce ACM (Attention with Connection Module), the performance drops a bit due to the noisy connections. But when we combine it with CCL (Contrastive Connection Learning), the performance improves to 74.3 of PQ. Note that applying CCL alone could only improve the performance marginally. Furthermore, KNN Interpolation boosts the performance significantly, reaching 77.3 of PQ. KNN Interpolation. In Tab. 4b, we ablate different ways of downsampling attention mask: 1) linear interpolation, 2) KNN average pooling, 3) KNN max pooling, 4) KNN interpolation. KNN average pooling and KNN max pooling means using the averaged value or max value of the K nearest neighboring points as output instead of the one defined in Eq. (10). We can see that the proposed KNN interpolation achieves the best performance. Architecture Design. We analyze the effect of varying model architecture design, like channel number of backbone and whether share weights for the L layers of symbol spotting head. As shown in Tab. 4c, we can see that enlarging the backbone, the query number and the feature channels of the symbol spotting head could further improve the performance. Sharing weights for spotting head not only saves model parameters but also achieves better performance compared with the one that does not share weights. 5 Conclusion and Future Work This work introduces a novel perspective for panoptic symbol spotting. We treat CAD drawings as sets of points and utilize methodologies from point cloud analysis for symbol spotting. Our method SymPoint is simple yet effective and outperforms previous works. One limitation is that our method needs a long training epoch to get promising performance. Thus accelerating model convergence is an important direction for future work. REFERENCES Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. 
Yolov4: Optimal speed and accuracy of object detection. *arXiv preprint arXiv:2004.10934*, 2020. Alexandre Carlier, Martin Danelljan, Alexandre Alahi, and Radu Timofte. Deepsvg: A hierarchical generative network for vector graphics animation. *Advances in Neural Information Processing Systems*, 33:16351–16361, 2020. Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. *Advances in Neural Information Processing Systems*, 34:17864–17875, 2021. Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 1290–1299, 2022. Yizong Cheng. Mean shift, mode seeking, and clustering. *IEEE transactions on pattern analysis and machine intelligence*, 17(8):790–799, 1995. Ruoxi Deng, Chunhua Shen, Shengjun Liu, Huibing Wang, and Xinru Liu. Learning to predict crisp boundaries. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 562–578, 2018. Zhiwen Fan, Lingjie Zhu, Honghua Li, Xiaohao Chen, Siyu Zhu, and Ping Tan. Floorplancad: A large-scale cad drawing dataset for panoptic symbol spotting. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 10128–10137, 2021. Zhiwen Fan, Tianlong Chen, Peihao Wang, and Zhangyang Wang. Cadtransformer: Panoptic symbol spotting transformer for cad drawings. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10986–10996, 2022. Nicholas Frosst, Nicolas Papernot, and Geoffrey Hinton. Analyzing and improving representations with the soft nearest neighbor loss. In *International conference on machine learning*, pp. 2012–2020. PMLR, 2019. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 297–304. JMLR Workshop and Conference Proceedings, 2010. Fangzhou Hong, Hui Zhou, Xinge Zhu, Hongsheng Li, and Ziwei Liu. Lidar-based panoptic segmentation via dynamic shifting network. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13090–13099, 2021. Xinyang Jiang, Lu Liu, Caihua Shan, Yifei Shen, Xuanyi Dong, and Dongsheng Li. Recognizing vector graphics without rasterization. *Advances in Neural Information Processing Systems*, 34:24569–24580, 2021. Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, and Piotr Dollár. Panoptic segmentation. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 9404–9413, 2019. Jinke Li, Xiao He, Yang Wen, Yuan Gao, Xiaoqiang Cheng, and Dan Zhang. Panoptic-phnet: Towards real-time and high-precision lidar panoptic segmentation via clustering pseudo heatmap. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11809–11818, 2022. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In *Proceedings of the IEEE international conference on computer vision*, pp. 2980–2988, 2017.
3TAhlGaMKD
Also, what significance does the loss function value have for ICL exemplars? For LoRA and SPT, the LLM has learned to minimize loss on member exemplars, but I do not know what relationship the ICL samples have with the loss function value.
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning Anonymous authors Paper under double-blind review Abstract Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences. However, to achieve optimal performance, LLMs often require adaptation with private data, which poses privacy and security challenges. Several techniques have been proposed to adapt LLMs with private data, such as Low-Rank Adaptation (LoRA), Soft Prompt Tuning (SPT), and In-Context Learning (ICL), but their comparative privacy and security properties have not been systematically investigated. In this work, we fill this gap by evaluating the robustness of LoRA, SPT, and ICL against three types of well-established attacks: membership inference, which exposes data leakage (privacy); backdoor, which injects malicious behavior (security); and model stealing, which can violate intellectual property (privacy and security). Our results show that there is no silver bullet for privacy and security in LLM adaptation and each technique has different strengths and weaknesses. 1 Introduction In recent years, Large Language Models (LLMs) have become integral to a plethora of products. Their efficacy is further underscored by their ability to adapt to customized—possibly private or personal—domains. Among the existing adaptation techniques, three have been particularly salient. First is Low-Rank Adaptation (LoRA) (Hu et al., 2022), wherein rank decomposition matrices are inserted into the target model enabling its recalibration to accommodate new datasets. Second, the Soft Prompt Tuning (SPT) (Lester et al., 2021) method, which optimizes prompt tokens with respect to the new dataset, and then prepends it to the inputs’ embeddings. Finally, In-Context Learning (ICL) (Zhao et al., 2021) where selected samples from the new dataset are placed directly into the input, serving as illustrative exemplars of the new dataset task/distribution. Despite some studies exploring the variations in utility among various adaptation techniques, a noticeable gap exists in the comprehensive comparison of their security and privacy properties. This paper takes a step to fill this gap, offering a three-fold assessment that encompasses both privacy and security aspects. In terms of privacy, our evaluation centers on the resilience of these techniques against one of the most well-established privacy concerns: membership inference attacks (MIAs). On the security front, we study the robustness of these techniques against two severe security threats. The first entails model stealing, wherein we evaluate the likelihood of an adversary successfully replicating the adapted model. The second revolves around backdoor attacks, where an adversary seeks to poison the dataset with the intention of embedding a stealthy backdoor into the model. Such a backdoor, if exploited, would empower the adversary to control the model’s output, e.g., outputting a specific response or label, by introducing a predefined trigger. We conduct an in-depth evaluation across three different LLM architectures: GPT2 (Radford et al., 2019), GPT2-XL (Radford et al., 2019), and LLaMA (Touvron et al., 2023), using four recognized NLP benchmark datasets: DBPedia (Zhang et al., 2015), AGNews (Zhang et al., 2015), TRECV (Li and Roth, 2002), and SST-2 (Wang et al., 2019). 
Figure 1 provides an abstract comparison of ICL, LoRA, and SPT with respect to membership inference attacks, model stealing, and backdoor threats. The figure highlights the lack of a single superior technique resilient against all privacy and security threats. For example, while ICL shows strong resistance to backdoor attacks, it is more vulnerable to... Figure 1: Comparative overview of ICL, LoRA, and SPT: Evaluating Privacy (resilience against membership inference attacks), Model Stealing Robustness (difficulty of unauthorized model replication), Data Efficiency (based on required training dataset size), and Backdoor Resilience with both Poisoned (backdoored/triggered data avoidance) and Clean (accurate label prediction) data scenarios. Larger values indicate better performance. membership inference attacks. Therefore, choosing the appropriate technique heavily relies on the specific scenario at hand. To the best of our knowledge, our detailed analysis is the first to extend some of the most prevalent attacks against machine learning models, such as the model stealing attack, into the domain of LLM with adaptation techniques. Furthermore, we believe it contributes valuable insights to the ongoing discourse on LLM adaptation techniques, offering a comprehensive view of their strengths and vulnerabilities. As the landscape of language models continues to evolve, our work provides a foundation for refining and advancing strategies that balance usability and privacy/security considerations in real-world applications. 2 RELATED WORK Training-efficient Adaptation Methods: Training Large Language Models (LLMs) for customized domains presents significant challenges due to their extensive parameter sizes, necessitating considerable computational resources. To address these challenges, innovative, computationally-efficient methods have been developed. Low-Rank Adaptation (LoRA) \cite{hu2022lora} introduces rank-decomposition weight matrices, referred to as “update matrices”, into the existing model parameters. The primary focus of training is shifted to these update matrices, enhancing training speed while simultaneously significantly decreasing computational and memory demands. Soft Prompt Tuning (SPT) \cite{lester2021power} takes a different approach by adding a series of prompt tokens to the input. During training, SPT only updates the gradients of these prompt token embeddings, while keeping the pretrained model’s core parameters frozen, making it computationally efficient. In-Context Learning (ICL) \cite{zhao2021in} conditions the model directly on supplied demonstrations (which are samples that are introduced in the input to guide the model), thus avoiding parameter updates altogether. While these techniques are computationally advantageous, our analysis indicates potential vulnerabilities in terms of privacy and security. Attacks against LLMs: Language models are vulnerable to a range of attacks, including membership inference \cite{mireshghallah2022membership,hishimoto2020membership}, reconstruction \cite{carlini2021extracting}, and backdoor \cite{chen2021backdoor,chen2022backdoor} attacks. While much of the previous research has focused on the vulnerabilities of pretrained or fully fine-tuned models, we study the different efficient adaptation techniques, specifically ICL, LoRA, and SPT. We aim to assess their relative strengths and weaknesses in terms of various privacy and security properties. 
Although there are recent concurrent studies, like \cite{kandpal2023backdoor}, that investigate backdooring in-context learning, and others such as \cite{duan2023information} that compare the information leakages (using membership inference) in fine-tuned models and in-context learning, our approach provides a more comprehensive comparison that encompasses additional training paradigms and datasets. Moreover, we extend the scope of comparison beyond privacy to include different security properties of the ICL, LoRA, and SPT techniques. 3 Membership Inference We begin by assessing the privacy attributes of the three adaptation techniques. To this end, we employ the membership inference attack (MIA), a recognized privacy attack against LLMs. Fundamentally, MIA aims to determine the likelihood of a given input being part of the training or fine-tuning dataset of a target model. In this work, the data used for training or fine-tuning corresponds to the datasets leveraged by the adaptation techniques, such as the demonstrations for ICL or the fine-tuning datasets for LoRA and SPT. 3.1 Threat Model We adopt the most conservative threat model, where the adversary is limited to black-box access to the target model. This scenario aligns with common deployment settings for LLMs, where the user merely obtains the label—specifically, the predicted words—along with their associated probabilities. 3.2 Methodology We adopt the widely-used loss-based membership inference attack (Yeom et al., 2018), wherein we compute the loss for every target input. Notably, member samples often exhibit lower loss values when compared to non-member samples, as depicted in the appendix (Figure 12). This observation serves as the basis for our membership determination. To quantitatively evaluate the results, we adhere to the methodology outlined in the state-of-the-art MIA work (Carlini et al., 2022) that plots the true positive rate (TPR) vs. false positive rate (FPR) to measure the data leakage using a logarithmic scale. This representation provides an in-depth evaluation of data leakage, emphasizing MIA performance in the low FPR area, which better reflects the worst-case privacy vulnerabilities of language models. In evaluating the privacy implications of the three distinct adaptation techniques—LoRA, SPT, and ICL—we strive to ensure a meticulous and fair comparison. Firstly, we first measure the utility of the ICL, recognizing its inherent constraint whereby the fixed input context length of target models limits the inclusion of demonstrations. Subsequently, we calibrate the hyperparameters of LoRA and SPT to align their performance with that of ICL, concrete model performance can be found in Appendix A. Following the training of these models, we employ membership inference attacks to assess their privacy attributes and draw comparative insights across the trio. Our assessment spans a variety of scenarios, integrating different datasets and target models, to thoroughly probe the privacy of ICL, LoRA, and SPT. 3.3 Evaluation Settings We now outline our experimental setup for evaluating MIA against the adaptation techniques LoRA, SPT, and ICL. We use four well-established downstream text classification tasks, each featuring a different label count. These benchmarks, commonly used in adaptation methods evaluation, especially for In-Context Learning (ICL), include DBPedia (Zhang et al., 2015) (14 class), AGNews (Zhang et al., 2015) (4 class), TREC (Li and Roth, 2002) (6 class), and SST-2 (Wang et al., 2019) (2 class). 
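For concreteness, the loss-based attack of Section 3.2 and the fixed-FPR evaluation can be sketched as below, assuming a Hugging Face-style causal language model whose forward pass returns the sequence loss; the function names and the simplification to raw text inputs are illustrative assumptions.

```python
import numpy as np
import torch

@torch.no_grad()
def membership_scores(model, tokenizer, texts, device="cpu"):
    """Membership score per text: negative language-model loss, so that
    higher scores indicate likely members (members tend to have lower loss)."""
    scores = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
        out = model(ids, labels=ids)          # causal LM loss over the sequence
        scores.append(-out.loss.item())
    return np.array(scores)

def tpr_at_fpr(member_scores, nonmember_scores, fpr=1e-2):
    """TPR of a threshold attack at a fixed FPR, as in the log-scale ROC analysis."""
    thresh = np.quantile(nonmember_scores, 1.0 - fpr)   # only `fpr` of non-members exceed it
    return float(np.mean(member_scores > thresh))
```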
Furthermore, we span our evaluation across three distinct language models: GPT2 (124M parameters) to GPT2-XL (1.5B parameters) and LLaMA (7B parameters). To achieve comparable performance for the different adaptation techniques, we train the model with a varying number of samples. For example, with DBPedia, we use 800 (SPT) and 300 (LoRA) samples to fine-tune the model, where the number of demonstrations used for ICL is set to 4, detailed hyperparameter setting can be found in Appendix A. For ICL, we follow the prompt design by Zhao et al. (2021), which yields a good performance, examples can be found in the appendix (Table 1). Following membership inference attack works (Shokri et al., 2017; Salem et al., 2019), we sample members and non-members as disjoint subsets from the same distribution. For both LoRA and SPT, we maintain an equivalent count for members and non-members. In the case of ICL, we follow previous works (Duan et al., 2023a) and consider more non-members (300) than members due to the constraint on the number of inputs in the prompt. To account for the inherent randomness, we conducted experiments 10 times for LoRA and SPT, and 300 times for ICL (due to its increased sensitivity of the examples used). 3.4 Results In Figure 2, we present the MIA performance across all four datasets using GPT2-XL as the target model. The figure clearly demonstrates that both Low-Rank Adaptation (LoRA) and Soft Prompt Tuning (SPT) have strong resistance to membership inference attacks, compared to ICL. Specifically, at a False Positive Rate (FPR) of $1 \times 10^{-2}$, both LoRA and SPT’s performances align closely with random guessing. Quantitatively, LoRA and SPT achieve True Positive Rates (TPR) of $0.010 \pm 0.007$ and $0.011 \pm 0.004$, respectively. Conversely, In-Context Learning (ICL) exhibits significant susceptibility to membership inference attacks. For instance, when evaluated on the DBPedia dataset, ICL achieves a TPR of $0.520 \pm 0.237$ at the aforementioned FPR—a figure that is $52.0 \times$ and $47.3 \times$ greater than what LoRA and SPT respectively achieve. We observe a similar pattern in the MIA performance across various datasets and models, as illustrated in Figure 2 and Figure 3. This can be attributed to the substantial differences in training data volume between ICL and the likes of LoRA and SPT. Specifically, ICL necessitates far fewer samples, often orders of magnitude less than what is required for SPT or LoRA. This observation aligns with previous membership inference studies which have highlighted that reduced training datasets tend to amplify the MIA success rates (Salem et al., 2019; Liu et al., 2022). To further investigate the influence of training sample sizes on ICL, we assess the MIA attack using different sample counts, such as 4 and 8 demonstrations. The results, presented in Figure 4, confirm that as we increase the number of demonstrations, the susceptibility to MIA decreases. However, it is essential to highlight that given the model’s limited context, there is a constraint on the maximum number of inputs that can be inserted. Consequently, we believe that MIA will consistently present a significant concern for ICL unless countered with an appropriate defense. 4 Model Stealing Next, we examine the resilience of ICL, LoRA, and SPT against model stealing threats. In these scenarios, adversaries seek to illegally replicate the functional capabilities of the target LLM. 
It is important to recognize that organizations and individuals invest significant resources, including valuable data and computational power, in the development of optimal models. Therefore, the prospect of an unauthorized replication of these models is a substantial and pressing concern.

4.1 Threat Model

We adopt the strictest setting, following the same threat model as MIA (Section 3.1), where only the label and its probability are given. For this attack, our focus is solely on the label, making it applicable even to black-box models that do not disclose probabilities. However, we assume the adversary knows the base model, e.g., GPT2 or LLaMA, used in the target model. We believe that this assumption is reasonable, considering the unique performance characteristics demonstrated by various base LLMs.

4.2 Methodology

To steal the target model, we follow previous work (Tramèr et al., 2016) and query the target model with a probing dataset. We explore two distinct strategies to construct this dataset. Initially, we assume the adversary has access to samples from the same distribution as the fine-tuning data. As an alternative, we utilize another LLM, specifically GPT-3.5-Turbo, to generate the probing dataset. This involves using the following prompt to generate the data: “Create a python list with 20 items, each item is [Dataset_Dependent]”. Here, Dataset_Dependent acts as a flexible placeholder, tailored according to the dataset. For instance, we use “a movie review” for SST-2 and “a sentence gathered from news articles. These sentences contain topics including World, Sports, Business, and Technology.” for AGNews. By invoking this prompt a hundred times, we produce a total of 2,000 GPT-crafted inputs for each dataset. After obtaining the outputs from the target model on the probing dataset, we harness these results to train surrogate/replica models using LoRA. To assess the success rate of our model-stealing approach, we adopt a matching score called “agreement” (Jagielski et al., 2020). This metric allows for a direct comparison between the outputs of the target and surrogate models for each sample, providing a reliable measure of the functional similarity between the two models. A match, irrespective of the correctness of the output, is considered a success. In addition, we calculate the accuracy of the surrogate models. Given the observed consistency between accuracy and agreement, we relegate the accuracy results to Appendix D and base our analysis of performance primarily on the agreement metric.

4.3 Evaluation Settings

We follow the same evaluation settings as those of membership inference (Section 3.3); specifically, models fine-tuned by the different adaptation techniques that achieve comparable performance. The surrogate model undergoes fine-tuning from an identical base model, utilizing LoRA with the specified parameters: \( r=16 \), \( \text{lora\_alpha}=16 \), \( \text{lora\_dropout}=0.1 \), \( \text{bias}=\text{all} \). This fine-tuning is performed over five epochs, with a learning rate of \( 1 \times 10^{-3} \). For every target model under consideration, the experiments are replicated five times, each instance employing a distinct random seed.

4.4 Results

Figure 5: Model stealing performance across various query budgets for DBPedia-trained models.

We initiate our assessment of the model stealing attack by examining various query budgets, i.e., probing datasets of different sizes.
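Before turning to the results, the stealing loop of Section 4.2 and the agreement metric can be sketched as follows; this is a simplified illustration, and `target_predict`, `train_surrogate_with_lora`, and `surrogate_predict` are hypothetical helpers rather than the paper's implementation.

```python
# Sketch of the model-stealing pipeline and the "agreement" metric (Jagielski et al., 2020).
# All callables passed as arguments are assumed to be provided by the caller.
def steal_model(target_predict, probing_inputs, train_surrogate_with_lora):
    # 1. Query the black-box target; only the predicted label is needed.
    stolen_labels = [target_predict(text) for text in probing_inputs]
    # 2. Fine-tune a surrogate (e.g., LoRA on the same base model) on the stolen labels.
    return train_surrogate_with_lora(probing_inputs, stolen_labels)

def agreement(target_predict, surrogate_predict, eval_inputs):
    # Fraction of inputs on which target and surrogate output the same label,
    # regardless of whether that label is actually correct.
    matches = sum(target_predict(text) == surrogate_predict(text) for text in eval_inputs)
    return matches / len(eval_inputs)
```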
For this evaluation, we employ the DBPedia dataset and draw samples for the probing datasets from the same distribution as the dataset of the target model. The results, illustrated in Figure 5, indicate that even with a constrained set of queries, the surrogate model aligns closely with the target model. For example, for all three model sizes, a mere 1,000 samples suffice to replicate a surrogate model that mirrors over 80% of the target's functionality. It is crucial to highlight that these unlabeled samples (which are subsequently labeled using the target model) are substantially more cost-effective to obtain than the labeled data used in the fine-tuning of the target model. We next assess the same settings but with a more lenient assumption, wherein the adversary lacks data from the target distribution. Instead, GPT-generated data is employed for constructing the probing dataset. As depicted in Figure 6, using such artificially generated data yields results comparable to those from the same distribution. This contrasts with vision tasks, where replicating an image classification model requires a substantially larger query budget without access to data from the same distribution (Liu et al., 2022; Truong et al., 2021). To further compare the performance of using generated data and data from the same distribution, we fix the query budget at 2,000 and assess the performance across the four datasets with GPT2-XL, as depicted in Figure 7. As expected, using data from the same distribution is better; however, in most cases, the difference is marginal. This trend is consistent across various model architectures, as demonstrated in the results presented in Appendix D. Intriguingly, there are instances, such as with AGNews (Figure 7a) and TREC (Figure 7c), where generated data actually facilitates a more successful model stealing attack. This observation opens the door to the potential of enhancing such attacks by optimizing data generation, perhaps leveraging sophisticated prompts or superior generation models, a direction we aim to explore in subsequent work. In conclusion, our findings emphasize the vulnerability of all three fine-tuning methods to model stealing attacks, even when the adversary has a limited query budget and lacks access to the target model's training data distribution.

5 BACKDOOR ATTACK

Lastly, we investigate an additional security threat against ICL, LoRA, and SPT: the backdoor attack. This attack occurs during training, when an adversary poisons the training dataset of a target model to introduce a backdoor. This backdoor is associated with a trigger such that when an input possesses the trigger, a particular output, as designated by the adversary, is predicted. This output might be untargeted, where the aim is merely an incorrect prediction, or it can be targeted to yield a specific label chosen by the adversary. In this work, we focus on the latter, more complex, case, i.e., the targeted backdoor attack.

5.1 THREAT MODEL

We follow the threat model of previous backdoor attacks (Gu et al., 2017) and make no specific assumptions about the target model other than its vulnerability to having its fine-tuning dataset poisoned. It is important to recap that the term “fine-tuning dataset” in this context pertains to the data leveraged by ICL, LoRA, and SPT for adapting the target model.

5.2 METHODOLOGY

To execute the backdoor attack, we start by crafting a backdoored dataset. First, we sample a subset from the fine-tuning dataset and integrate the trigger into every input.
Next, we switch the associated label to the predetermined backdoor target label. For the purposes of this study, this label is set to 0. Once the backdoored dataset is ready, it is merged with the clean fine-tuning dataset, and then the target models are trained using the respective techniques. We do not replace clean samples but concatenate the fine-tuning dataset with the backdoored one. For evaluation, we follow previous backdoor attack works (Gu et al., 2017; Salem et al., 2022; Kandpal et al., 2023) that use two primary metrics: utility and attack success rate. Utility quantifies the performance of the backdoored model on a clean test dataset. The closer this metric aligns with the accuracy of an unaltered (clean) model, the more effective the backdoor attack. The attack success rate, on the other hand, evaluates how accurately backdoored models respond to backdoored data. We construct a backdoored test dataset by inserting triggers into the entirety of the clean test dataset and reassigning the labels to our target value (i.e., 0), and then use this dataset to evaluate the backdoored model. An attack success rate of 100% represents a perfect backdoor attack. Finally, in the ICL scenario, given that the count of examples is constrained, we ensure that the backdoored dataset excludes any inputs whose original label coincides with the target label. This aims to maximize the performance of the backdoor attack in the ICL setting. Furthermore, acknowledging the influence of demonstration order on ICL performance (Zhao et al., 2021), we adopt two separate poisoning approaches for ICL. In the first approach, we poison sentences at the start of the prompt, and in the second, we target sentences at the prompt's end.

5.3 EVALUATION SETTINGS

We follow the same evaluation settings as those of membership inference (Section 3.3), but with an added step involving the creation of a backdoored fine-tuning dataset before initiating model training. We construct the backdoored fine-tuning dataset as follows: for each selected clean sentence, we introduce the trigger word “Hikigane” (which translates to “trigger” in Japanese) at its beginning and adjust its associated label to class 0. These modified sentences are then added to the clean fine-tuning dataset without removing any original samples. We assess the backdoor attack across varying poisoning rates. Specifically, for LoRA and SPT, the poisoning rate ranges between 0.1 and 0.75. For ICL, given that we use only four demonstrations, we examine scenarios with 1, 2, or 3 poisoned demonstrations, resulting in poisoning rates of 0.25, 0.5, and 0.75, respectively.

5.4 RESULTS

We first assess the backdoor attack across varying poisoning rates using three datasets, DBPedia, AGNews, and TREC, with the GPT2-XL model. The results are illustrated in Figure 8. Based on our preliminary experiments, we decided to omit the SST-2 dataset, since its binary structure, when subjected to a backdoor, substantially reduced the model utility across all adaptation methods. As anticipated, for LoRA and SPT, an increase in the poisoning rate boosts the attack success rate (ASR) of the backdoor attack. This rise can be attributed to the model's improved trigger recall as it encounters more backdoored data during fine-tuning. Conversely, the utility of the backdoored model sees a minor decline as the poisoning rate grows, as shown in Figure 9.
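Before discussing this drop in utility further, the poisoning and evaluation steps of Sections 5.2 and 5.3 can be summarized in the sketch below; this is not the authors' code, `predict` stands for any callable mapping an input text to a label, and the subset selection is simplified.

```python
# Sketch of backdoor-dataset construction and the attack-success-rate metric.
import random

TRIGGER, TARGET_LABEL = "Hikigane", 0

def poison(clean_dataset, rate, seed=0):
    # Sample a fraction `rate` of the clean (text, label) pairs, prepend the trigger,
    # and flip the label; poisoned copies are concatenated with, not substituted for,
    # the clean samples.
    rng = random.Random(seed)
    subset = rng.sample(clean_dataset, int(rate * len(clean_dataset)))
    poisoned = [(f"{TRIGGER} {text}", TARGET_LABEL) for text, _ in subset]
    return clean_dataset + poisoned

def attack_success_rate(predict, clean_test):
    # Insert the trigger into every test input and measure how often the model
    # outputs the adversary-chosen target label.
    hits = sum(predict(f"{TRIGGER} {text}") == TARGET_LABEL for text, _ in clean_test)
    return hits / len(clean_test)
```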
This decline in utility could be a result of the model slightly overfitting to the backdoored pattern, possibly weakening the connection between clean sentences and their designated classes. Conversely, In-Context Learning (ICL) shows minimal variation in performance as the poison rate increases, consistently approximating random guessing. We speculate that the limited number of demonstrations might cause this, making the model rely more on its inherent knowledge than on the new, backdoored input. Kandpal et al. (2023) explore a situation where backdooring takes place before model adaptation through ICL, i.e., the model is first fine-tuned with backdoored data. Their findings indicate robust backdoor performance, even in the absence of backdoored demonstrations. This aligns with our hypothesis that ICL models draw more from their inherent knowledge than from the few provided demonstrations.

Figure 10: Comparison of attack success rates at various poison rates for DBPedia models.

Figure 11: Backdoor attack performance when poisoning the first or the last demonstration in the prompt. The baseline indicates random guessing performance for the target label 0.

Our observation extends to models of varying sizes. As shown in Figure 10, ICL exhibits an ASR close to random guessing across all three models, while SPT and LoRA consistently outperform ICL by a significant margin. We further validate the transferability of our conclusion to different target labels, as shown in Appendix C. Finally, we investigate whether poisoning either the first or the last demonstration in the prompt yields a noticeable difference. To this end, we independently poison the first and the last demonstration in the prompt and plot the results in Figure 11. The results indicate a marginal increase in attack success rate when the initial sentence is poisoned, even though the variation is minimal. These results show that the location of poisoned data within the prompt does not substantially influence the effectiveness of the backdooring approach in the context of ICL.

6 DISCUSSION AND LIMITATIONS

While we recognize that more advanced attacks could target Language Models (LLMs), especially in pretrained or full fine-tuning scenarios, our study serves as an empirical lower bound for evaluating vulnerabilities across diverse LLM adaptation techniques. Our findings highlight the inherent vulnerabilities of these techniques to a variety of threats, emphasizing the pressing need for robust defenses in such settings. To the best of our knowledge, the majority of defenses against privacy and security threats are tailored for full fine-tuning scenarios. However, we believe that the core of these defenses can be adapted to the LLM adaptation techniques. For instance, recent works have successfully extended differential privacy, a well-established defense with guarantees against membership inference attacks, to ICL settings (Panda et al., 2023; Duan et al., 2023b; Tang et al., 2023). Moving forward, we intend to adapt these defenses to the LLM adaptation techniques and assess their efficacy against the presented attacks.

7 CONCLUSION

In this study, we have systematically investigated the vulnerabilities of existing adaptation methods for Large Language Models (LLMs) through a three-fold assessment that encompasses both privacy and security considerations. Our findings reveal three key insights into the security and privacy aspects of LLM adaptation techniques.
Firstly, In-Context Learning (ICL) emerges as the most vulnerable to membership inference attacks (MIAs), underscoring the need for enhanced privacy defenses in the implementation of this technique. Secondly, our study reveals a pervasive vulnerability across all three training paradigms to model stealing attacks. Intriguingly, the use of GPT3.5-generated data demonstrates a strong performance in such attacks, highlighting the ease with which fine-tuned LLMs can be stolen or replicated. Lastly, with respect to backdoor attacks, our results indicate that Low-Rank Adaptation (LoRA) and Soft Prompt Tuning (SPT) exhibit a higher susceptibility, whereas ICL proves to be less affected. These insights emphasize the necessity for tailored defenses in the deployment of LLM adaptation techniques. Moreover, they underscore each technique’s vulnerabilities, alerting users to the potential risks and consequences associated with their use. REFERENCES Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-Rank Adaptation of Large Language Models. In *International Conference on Learning Representations (ICLR)*, 2022. Brian Lester, Rami Al-Rfou, and Noah Constant. The Power of Scale for Parameter-Efficient Prompt Tuning. In *Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 3045–3059. ACL, 2021. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate Before Use: Improving Few-shot Performance of Language Models. In *International Conference on Machine Learning (ICML)*, pages 12697–12706. PMLR, 2021. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. *OpenAI blog*, 2019. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and Efficient Foundation Language Models. *CoRR abs/2302.13971*, 2023. Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level Convolutional Networks for Text Classification. In *Annual Conference on Neural Information Processing Systems (NIPS)*, pages 649–657. NIPS, 2015. Xin Li and Dan Roth. Learning Question Classifiers. In *International Conference on Computational Linguistics (COLING)*. ACL, 2002. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In *International Conference on Learning Representations (ICLR)*, 2019. Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri. Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks. In *Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 8332–8347. ACL, 2022. Sorami Hisamoto, Matt Post, and Kevin Duh. Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System? *Transactions of the Association for Computational Linguistics*, 2020. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting Training Data from Large Language Models. In *USENIX Security Symposium (USENIX Security)*, pages 2633–2650. USENIX, 2021. 
Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, and Yang Zhang. BadNL: Backdoor Attacks Against NLP Models with Semantic-preserving Improvements. In *Annual Computer Security Applications Conference (ACSAC)*, pages 554–569. ACSAC, 2021. Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, and Chun Fan. BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. In *International Conference on Learning Representations (ICLR)*, 2022. Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. Backdoor Attacks for In-Context Learning with Language Models. *CoRR abs/2307.14692*, 2023. Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, and Franziska Boenisch. On the Privacy Risk of In-context Learning. In *Workshop on Trustworthy Natural Language Processing (TrustNLP)*, 2023a.
CQF8mTF7qx
As discussed above, how far could you get without making the assumptions about (1) high data dimension and (2) third derivative positivity? Let's say we didn't care about proving global convergence, or global convergence rate, and just cared about understanding the structure of the flattest global minimizers.
SIMPLICITY BIAS VIA GLOBAL CONVERGENCE OF SHARPNESS MINIMIZATION Anonymous authors Paper under double-blind review ABSTRACT The remarkable generalization ability of neural networks is usually attributed to the implicit bias of SGD, which often yields models with lower complexity using simpler (e.g. linear) and low-rank features [Huh et al., 2021]. Recent works have provided empirical and theoretical evidence for the bias of particular variants of SGD (such as label noise SGD) toward flatter regions of the loss landscape. Despite the folklore intuition that flat solutions are ‘simple’, the connection with the simplicity of the final trained model (e.g. low-rank) is not well understood. In this work, we take a step toward bridging this gap by studying the simplicity structure that arises from minimizers of the sharpness for a class of two-layer neural networks. We show that, for any high dimensional training data and certain activations, with small enough step size, label noise SGD always converges to a network that replicates a single linear feature across all neurons; thereby implying a simple rank one feature matrix. To obtain this result, our main technical contribution is to show that label noise SGD always minimizes the sharpness on the manifold of models with zero loss for two-layer networks. Along the way, we discover a novel property — a local geodesic convexity — of the trace of Hessian of the loss at approximate stationary points on the manifold of zero loss, which links sharpness to the geometry of the manifold. This tool may be of independent interest. 1 INTRODUCTION Overparameterized neural networks trained by stochastic gradient descent (SGD) have demonstrated remarkable generalization ability. The emergence of this ability, even when the network perfectly fits the data and without any explicit regularization, still remains a mystery [Zhang et al., 2017]. A line of recent works have attempted to provide a theoretical basis for such observations by showing that SGD has an implicit bias toward functions with “low complexity” among all possible networks with zero training loss [Neyshabur et al., 2017; Soudry et al., 2018; Gunasekar et al., 2017; Li et al., 2018; Gunasekar et al., 2018; Woodworth et al., 2020; Li et al., 2019; HaoChen et al., 2021; Pesme et al., 2021]. In particular, it is noted that SGD tends to pick simpler features to fit the data over more complex ones [Hermann & Lampinen, 2020; Kalimeris et al., 2019; Neyshabur et al., 2014; Pezeshki et al., 2021]. This tendency is typically referred to as simplicity bias. Notably, [Huh et al., 2021] empirically observes that the feature matrix on the training set tends to become low-rank and that this property is amplified in deeper networks. On the other hand, recently, a different form of implicit bias of SGD based on the sharpness of loss landscape has garnered interest. The study of flatness of the loss landscape and its positive correlation with generalization is not new (e.g. [Hochreiter & Schmidhuber, 1997]), and has been supported by a plethora of empirical and theoretical evidence [Keskar et al., 2016; Dzunagaitė & Roy, 2017; Jastrzębski et al., 2017; Neyshabur et al., 2017; Wu et al., 2018; Jiang et al., 2019; Blanc et al., 2019; Wei & Ma, 2019a,b; HaoChen et al., 2020; Foret et al., 2021; Damian et al., 2021; Li et al., 2021; Ma & Ying, 2021; Ding et al., 2022; Nacson et al., 2022; Wei et al., 2022; Lyu et al., 2022; Norton & Royset, 2021]. 
It has been suggested that, after reaching zero loss, neural networks trained using some variants of SGD are biased toward regions of the loss landscape with low sharpness [Blanc et al., 2019; Damian et al., 2021; Li et al., 2021; Arora et al., 2022; Lyu et al., 2022; Li et al., 2022a; Wen et al., 2022; Bartlett et al., 2022]. In particular, [Blanc et al., 2020] discovered the sharpness-reduction implicit bias of a variant of SGD called label noise SGD, which explicitly adds noise to the labels. They proved that SGD locally diverges from points that are not stationary for the trace of the Hessian of the loss after fitting the training data. Along this line, Li et al. (2021) show that in the limit of step size going to zero, label noise SGD converges to a gradient flow according to the trace of Hessian of the loss after reaching almost zero loss. This flow can be viewed as a Riemannian gradient flow with respect to the sharpness on the manifold of zero loss, i.e., the flow following the gradient of sharpness projected onto the tangent space of the manifold. Furthermore, a standard Bayesian argument based on minimum description length suggests that such flat minimizers correspond to “simple” models. From a conceptual point of view, both simplicity bias and sharpness minimization bias suggest that SGD learns simple solutions; however, the nature of “simplicity” is different. Simplicity bias is defined in the feature space, meaning the model learns simple features, e.g., low-rank features, whereas low sharpness is defined in the parameter space, meaning the loss landscape around the learned parameter is flat, such that the model is resilient to random parameter perturbations and admits a short and simple description from a Bayesian perspective. It is thus an interesting question whether these two notions of “simplicity” for neural networks have any causal relationship. Existing works that attempt to study this connection are restricted to somewhat contrived settings (e.g., Blanc et al. (2020), Shaltue et al. (2018), Szegedy et al. (2016), HaoChen et al. (2021)). For instance, Blanc et al. (2020) study 1-D two-layer ReLU networks and two-layer sigmoid networks trained on a single data point, where they show that the stationary points of the trace of Hessian on the manifold of zero loss are solutions that are simple in nature. They further prove that label noise SGD locally diverges from non-stationary points. As another example, Li et al. (2021) and Vivien et al. (2022) independently show that label noise SGD can recover the sparse ground truth, but only for a simple overparameterized quadratic linear model. In this paper, we study the following fundamental question: (1) Is there a non-linear general setting where the sharpness-reduction implicit bias provably implies simplicity bias? However, even if we can show that a model with minimal sharpness is a simple model, a key open question is whether the sharpness-reduction implicit bias, characterized by the Riemannian gradient flow of sharpness in (Li et al., 2021), even successfully minimizes the sharpness in the first place. The convergence of the gradient flow so far is only known in very simple settings, i.e., quadratically overparametrized linear nets (Li et al., 2021). In this regard, a central open question in the study of sharpness minimization for label noise SGD is: (2) Does label noise SGD converge to the global minimizers of the sharpness on the manifold of zero loss, and if it does, how fast does it converge?
In this paper, we show that sharpness regularization provably leads to simplicity bias for two-layer neural networks with certain activation functions. We further prove that the Riemannian gradient flow on the manifold converges linearly to the global minimizer of the sharpness from any initialization. To our best knowledge, this is the first global linear convergence result for the gradient flow of the trace of Hessian on manifolds of minimizers. More formally, we consider the mean squared loss \( L(\theta) = \sum_{i=1}^{n} (\sum_{j=1}^{m} \phi(\theta_j^\top x_i) - y_i)^2 \) for a two-layer network model where the weights of the second layer are fixed to one and \( \{(x_i, y_i)\}_{i=1}^{n} \) is the training dataset.\(^1\) We consider label noise SGD on \( L \), that is, running gradient descent on the loss \( \sum_{i=1}^{n} (\sum_{j=1}^{m} \phi(\theta_j^\top x_i) - y_i + \zeta_{t,i})^2 \) at each step \( t \), in which independent Gaussian noise \( \zeta_{t,i} \sim \mathcal{N}(0, \sigma^2) \) is added to the labels. The following is the main result of our paper, which holds under Assumptions 1, 2, and 3 and for a sufficiently small learning rate.

**Theorem 1** (Convergence to simple solutions from any initialization). Given any \( \epsilon > 0 \) and arbitrary initial parameter \( \theta[0] \), for label noise SGD with any noise variance \( \sigma^2 > 0 \) initialized at \( \theta[0] \), there is a sufficiently small learning rate \( \eta \) such that running label noise SGD with step size \( \eta \) and \( T = \tilde{O}(\log(1/\epsilon) \cdot \eta^{-2} \sigma^{-2}) \), with probability at least \( 1 - \epsilon \),

1. \( L(\theta[T]) \leq \epsilon; \)
2. \( \text{Tr}(D^2 L(\theta[T])) \leq \inf_{\theta: L(\theta) = 0} \text{Tr}(D^2 L(\theta)) + \epsilon; \)
3. for all neurons \( \theta_j[T] \) and all data points \( x_i \), \( |\theta_j[T]^\top x_i - \phi^{-1}(y_i/m)| \leq \epsilon, \)

where \( D^2 L \) is the Hessian of the loss and we hide constants depending on the loss \( L \) and the initialization \( \theta[0] \) in \( O(\cdot) \).

\(^1\)We discuss how our result generalizes to the case of unequal weights at the end of Section 4.2.

Informally, Theorem 1 states that label noise SGD first reaches zero loss and then (1) successfully minimizes the sharpness to its global optimum and (2), more importantly, recovers the simplest possible solution that fits the data, in which the feature matrix is rank one (see Appendix C.3 for proof details). In particular, Theorem 1 holds for pure label noise SGD without any additional projection. Note that if we initialize the weights at zero, then because the update of label noise SGD is always in the span of the data points, the weights remain in the data span as well. On the other hand, property 3 in Theorem 1 states that the dot products of all the neurons with each single data point converge to each other, which means all the neurons collapse into the same vector. Our technical contributions cover the following two aspects:

• **Simplicity bias:** We prove that at every minimizer of the trace of Hessian, the pre-activations of different neurons become the same for every training data point. In other words, all of the feature embeddings of the dataset with respect to different neurons collapse into a single vector, e.g., see Figure 1a.
This, combined with our convergence result, mathematically proves that the low-rank feature observation by [Huh et al., 2021] holds in our two-layer network setting for algorithms with a sharpness-minimization-induced regularizer, such as label noise SGD and 1-SAM.

• **Convergence analysis:** Under an additional normality assumption on the activation (Assumption 3), we show that the gradient flow of the trace of Hessian converges exponentially fast to the global optimum on the manifold of zero loss, and as a result, label noise SGD with a sufficiently small step size successfully converges to the global minimizer of the trace of Hessian among all the models achieving zero loss. Importantly, we do not assume additional technical conditions (such as the PL inequality and the non-strict saddle property used in prior works) about the landscape of the loss or its sharpness, and our constants are explicit and only depend on the choice of the activation and the coherence of the data matrix. Moreover, our convergence results hold in the strong sense, i.e., the convergence holds for the last iterate of label noise SGD. The novelty of our approach (after showing that the algorithm converges to zero loss) is that we characterize the convergence on the manifold of zero loss in two phases: (1) a first phase, in which the algorithm is far from stationary points, the trace of Hessian decreases at a constant rate, and there is no convexity in the trace of Hessian; and (2) a second phase, in which the algorithm gets close to stationarity and for which we prove a novel g-convexity property of the trace of Hessian that holds only locally on the manifold but, as we show, implies an exponentially fast convergence rate once the Lyapunov potential is changed from the value of the trace of Hessian to the squared norm of its gradient on the manifold. We further prove that approximate stationary points are physically close to a global optimum via a semi-monotonicity property. Interestingly, we observe this simplicity bias even beyond the regime of our theory; namely, when the number of data points exceeds the ambient dimension, instead of a single vector, the feature embeddings of the neurons cluster and collapse into a small set of vectors (see Figure 1b).

2 RELATED WORK

**Implicit Bias of Sharpness Minimization.** Recent theoretical investigations, including those by [Blanc et al., 2019], [Damian et al., 2021], and [Li et al., 2021], have indicated that Stochastic Gradient Descent (SGD) with label noise intrinsically favors local minimizers with smaller Hessian traces. This is under the assumption that these minimizers are connected locally as a manifold, and these analyses focus on the final phase of training, when iterates are close to the manifold of minimizers. The effect of label noise or mini-batch noise in the central phase of training, i.e., when the loss is still substantially above zero, is more difficult and is only known for simple quadratically over-parametrized linear models [Vivien et al., 2022; Andriushchenko et al., 2023b]. Further, [Arora et al., 2022] demonstrated that normalized Gradient Descent (GD) intrinsically penalizes the Hessian's largest eigenvalue. [Ma et al., 2022] proposes that such sharpness reduction phenomena could also be triggered by a multi-scale loss landscape. In the context of scale-invariant loss functions, [Lyu et al., 2022] found that GD with weight decay implicitly reduces the spherical sharpness, defined as the Hessian's largest eigenvalue evaluated at the normalized parameter.
Another line of research examines the sharpness minimization effect of large learning rates, assuming that (stochastic) gradient descent converges at the end of training. This analysis primarily hinges on the concept of linear stability, as referenced in works by Wu et al. (2018), Cohen et al. (2021), Ma & Ying (2021), and Cohen et al. (2022). More recent theoretical analyses, such as those by Damian et al. (2022) and Li et al. (2022b), suggest that the sharpness minimization effect of large learning rates in gradient descent does not necessarily depend on the convergence assumption and linear stability. Instead, they propose a four-phase characterization of the dynamics in the so-called Edge of Stability regime, as detailed in Cohen et al. (2021). Sharpness-aware Minimization (SAM, Foret et al. (2021)) is a highly effective regularization method that improves generalization by penalizing the sharpness of the landscape. SAM was independently developed in Zheng et al. (2021) Norton & Royset (2021). Recently, Wen et al. (2022); Bartlett et al. (2022) proved that SAM successfully minimizes the worst-direction sharpness under the assumption that the minimizers of training loss connect as a smooth manifold. Wen et al. (2022) also analyze SAM with batch size 1 for regression problems and show that the implicit bias of SAM becomes penalizing average-direction sharpness, which is approximately equal to the trace of the Hessian of the training loss. Andriushchenko et al. (2023a) observe that the SAM update rule for ReLU networks is locally biased toward sparsifying the features leading to low-rank features. They further confirm empirically that deep networks trained by SAM are biased to produce low-rank features. 3 Main Results Problem Setup. In this paper, we focus on the following two-layer neural network model: \[ r_{\theta,\text{NN}}(x) \triangleq \sum_{j=1}^{m} \phi(\theta_j^\top x). \] Here \( \theta = (\theta_1, \ldots, \theta_m) \) and \( \phi \) are the set of parameters and activation of the neural network, respectively. As illustrated in equation 1, we assume that all the second-layer weights are equal to one. At the end of section 4.2, we discuss how our approach in Theorem 2 generalizes to any fixed choice of weights in the second layer. We denote the matrix of the weights by \( \Theta \in \mathbb{R}^{m \times d} \), whose \( j \)-th row is \( \theta_j \). Given a training dataset \( \{(x_i, y_i)\}_{i=1}^{n} \), we study the mean squared loss \[ L(\theta) \triangleq \sum_{i=1}^{n} (r_{\theta,\text{NN}}(x_i) - y_i)^2. \] For simplicity, we assume that the data points have unit norm, i.e. \( \forall i \in [n], \|x_i\| = 1 \). Let \( M \) denote the zero level set of \( L \): \( M \triangleq \{ \theta \in \mathbb{R}^{md} | L(\theta) = 0 \} \). As we show in Lemma 16 in the Appendix, \( M \) is well-defined as a manifold due to a mild non-degeneracy condition implied by Assumption 1. We equip \( M \) with the standard Euclidean metric in \( \mathbb{R}^{md} \). The starting point of our investigation is the work [Li et al., 2021], which characterizes the minimizer of the trace of the Hessian of the loss, \( \text{Tr} D^2 L \), as the implicit bias of SGD by showing that, in the limit of step size going to zero, label noise SGD evolves on the manifold of zero loss according to the following deterministic gradient flow: \[ \frac{d}{dt} \theta(t) \triangleq -\nabla \text{Tr} D^2 L(\theta(t)). 
\] (3)

Here, \( \nabla \) is the Riemannian gradient operator on the manifold \( M \), which is the projection of the ordinary Euclidean gradient onto the tangent space of \( M \) at \( \theta(t) \). Note that starting from a point \( \theta(0) \) on \( M \), \( \theta(t) \) remains on \( M \) for all times \( t \) (for a quick recap on basic notions in Differential Geometry such as the covariant derivative and the gradient on a manifold, we refer the reader to Appendix I). On a manifold, similar to Euclidean space, having a zero gradient \( \| \nabla \text{Tr} D^2 L(\theta^*) \| = 0 \) (the gradient defined on the manifold) at some point \( \theta^* \) is a necessary condition for being a local minimizer; equivalently, the projection of the Euclidean gradient onto the tangent space at \( \theta^* \) has to vanish. More generally, we call \( \theta \) an \( \epsilon \)-stationary point on \( M \) if \( \| \nabla \text{Tr} D^2 L(\theta) \| \leq \epsilon \).

### 3.1 Characterization of Stationary Points

Before characterizing the stationary points of \( \text{Tr} D^2 L \), we introduce an important assumption on the activation function, namely the strict positivity and convexity of its derivative.

**Assumption 1** (Strict positivity and convexity of \( \phi' \)). For all \( z \in \mathbb{R} \), \( \phi'(z) \geq \varrho_1 > 0 \) and \( \phi'''(z) \geq \varrho_2 > 0 \).

For example, Assumption 1 is satisfied for \( \phi(x) = x^3 + \nu x \) with constants \( \varrho_1 = \nu, \varrho_2 = 1 \). In Lemma 15, given the condition \( \phi''' > 0 \), we show that \( \theta(t) \) remains in a bounded domain along the gradient flow, which means that the weaker assumption \( \phi', \phi''' > 0 \) automatically implies Assumption 1 for some positive constants \( \varrho_1 \) and \( \varrho_2 \). Another activation of interest to us that happens to be extensively effective in NLP applications is the cube (\( x^3 \)) activation [Chen & Manning, 2014]. Although \( x^3 \) does not satisfy the strict positivity required in Assumption 1, we separately prove Theorem 2 for \( x^3 \) (see Lemma 18 in the appendix). Next, let \( X \triangleq (x_1 | \ldots | x_n) \) be the data matrix. We make a coherence assumption on \( X \), which requires the input dimension \( d \) to be at least as large as the number of data points \( n \).

**Assumption 2** (Data matrix coherence). The data matrix \( X \) satisfies \( X^\top X \succeq \mu I \).

Assumption 1 implies that \( M \) is well-defined as a manifold (see Lemma 16 for a proof). Under the aforementioned Assumptions 1 and 2, we characterize the stationary points of the trace of the Hessian regularizer on the manifold.

**Theorem 2** (First-order optimal points). Under Assumptions 1 and 2, the first-order optimal points and the global optima of \( \text{Tr} D^2 L \) on \( M \) coincide and are equal to the set of all \( \theta^* = [\theta^*_j]_{j=1}^m \) such that, for all \( i \in [n] \) and \( j \in [m] \),
\[ {\theta^*_j}^\top x_i = \phi^{-1}(y_i/m). \] (4)

Note that instead of Assumption 2, the linear independence of \( \{x_i\}_{i=1}^n \) alone suffices to prove Theorem 2, as pointed out in Section 4.2. In particular, Theorem 2 means that the feature matrix is rank one at a global optimum \( \theta^* \), which together with our main convergence result, Theorem 1, proves the low-rank bias conjecture proposed by [Huh et al., 2021] in the two-layer network setting for label noise SGD and 1-SAM.
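As a small numerical illustration of this rank-one collapse (a sketch, not the paper's experiments; the hyperparameters are arbitrary and may need tuning), one can run label noise SGD on the model of equation 1 with \( \phi(z) = z^3 + \nu z \) and check that every pre-activation \( \theta_j^\top x_i \) approaches \( \phi^{-1}(y_i/m) \):

```python
# Label noise SGD on the two-layer model of equation (1) with phi(z) = z^3 + nu*z.
# Theorems 1-2 predict that theta_j^T x_i becomes (nearly) identical across neurons j.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, nu = 5, 32, 8, 1.0                      # d >= n, in the spirit of Assumption 2
phi, dphi = (lambda z: z**3 + nu * z), (lambda z: 3 * z**2 + nu)

X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # unit-norm data points
y = rng.normal(size=n)
theta = 0.1 * rng.normal(size=(m, d))            # m neurons, each in R^d

eta, sigma = 1e-3, 0.1                           # step size and label-noise std (arbitrary)
for _ in range(100_000):
    pre = theta @ X.T                            # (m, n) pre-activations theta_j^T x_i
    resid = phi(pre).sum(axis=0) - y + sigma * rng.normal(size=n)
    theta -= eta * 2.0 * (dphi(pre) * resid) @ X # gradient step on the noisy squared loss

# Spread of the pre-activations across neurons; it should shrink for every data point.
print(np.ptp(theta @ X.T, axis=0))
```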
Notice the \( 1/m \) effect of the number of neurons on the global minimizers of the implicit bias in equation 4. While Theorem 2 concerns networks with equal second-layer weights, the case of unequal weights can be analyzed similarly, as we discuss after the proof of Theorem 2 in Section 4.2.

3.2 Convergence Rate

Next, to state our convergence result, we introduce the key \( \beta \)-normality assumption, under which we bound the rate of convergence of the trace of Hessian to its global minimizer.

**Assumption 3** (\( \beta \)-normality). For all \( z \in \mathbb{R} \), the second derivative of \( \phi(z) \) can be bounded by the first and third derivatives as
$$\beta \phi''(z) \leq \phi'^2(z)\phi'''(z).$$

An example class of activations that satisfies both Assumptions 1 and 3 is of the form \( \phi(z) = z^{2k+1} + \nu z \) for \( \nu > 0 \), which is well-known in the deep learning theory literature [Li et al., 2018; Jelassi et al., 2022; Allen-Zhu & Li, 2020b; Woodworth et al., 2020]. Notably, these activations satisfy Assumption 3 with normality coefficient \( \beta = \min\{\frac{1}{(2k+1)^2}, \nu^2\} \). Under Assumptions 1, 2, and 3, we show that \( \theta(t) \) converges to \( \theta^* \) exponentially fast in Theorem 3. This results from strong local \( g \)-convexity of the trace of Hessian at approximate stationary points (see Lemmas 4 and 10), plus a semi-monotonicity property for the trace of Hessian (Lemma 6).

**Theorem 3** (Convergence of the gradient flow). Consider the limiting flow of label noise SGD on the manifold of zero loss, which is the gradient flow in equation 3. Then, under Assumptions 1, 2, and 3,

(C.1) the gradient flow reaches \( \epsilon \)-stationarity for all times
$$t \geq \frac{\text{Tr} D^2 L(\theta(0))}{\mu \beta^2} + \frac{\log\left(\beta^2 / (\varrho_1^2 \varrho_2^2 \epsilon^2)\right)}{\varrho_1 \varrho_2 \mu};$$

(C.2) for all \( j \in [m] \) and \( i \in [n] \), the dot product of the \( j \)th neuron with the \( i \)th data point becomes \( \epsilon \)-close to that of any global optimum \( \theta^* \):
$$|\theta_j(t)^\top x_i - {\theta_j^*}^\top x_i| \leq \epsilon,$$
where \( \theta^* \) is defined in Theorem 2.

The formal proof of Theorem 3 is given in Section C.3. We sketch its proof in Section 4.3. Note that the rate is logarithmic in the accuracy parameter \( \epsilon \), i.e., the flow converges exponentially fast to the global minimizer.

4 Proof Sketches

Before going into the technical details of the proofs, we discuss some necessary background and additional notation used in the proofs.

4.1 Additional Notation

We use \( f_i(\theta) \) to denote the output of the network on the \( i \)th input \( x_i \), i.e.,
$$f_i(\theta) \triangleq r_{\theta,\text{NN}}(x_i).$$
We use \( f(\theta) \triangleq (f_1(\theta), \ldots, f_n(\theta)) \) to denote the array of outputs of the network on \( \{x_i\}_{i=1}^n \). We use \( D \) for the Euclidean directional derivative or Euclidean gradient, and \( \nabla \) for the covariant derivative on the manifold or the gradient on the manifold. We denote the Jacobian of \( f \) at point \( \theta \) by \( Df(\theta) \), whose \( i \)th row is \( Df_i(\theta) \), the Euclidean gradient of \( f_i \) at \( \theta \). Recall the definition of the manifold of zero loss as the zero level set of the loss \( L \), which is the intersection of the zero level sets of the functions \( f_i - y_i \), \( i \in [n] \): \( M = \{\theta \in \mathbb{R}^{md} \mid \forall i \in [n], f_i(\theta) = y_i\} \).
The tangent space \( T_\theta(M) \) of \( M \) at point \( \theta \) can be identified with the tangents to all curves on \( M \) passing through \( \theta \), and the normal space \( T_\theta^N(M) \) in this setting is just the orthogonal complement of \( T_\theta(M) \). We denote the projection operators onto \( T_\theta(M) \) and \( T_\theta^N(M) \) by \( P_\theta \) and \( P_\theta^N \), respectively. For a set of vectors \( \{v_i\}_{i=1}^n \), where \( v_i \in \mathbb{R}^d \) for all \( i \in [n] \), we use the notation \( \begin{bmatrix} v_i \end{bmatrix}_{i=1}^n \) to denote the vector in \( \mathbb{R}^{nd} \) obtained by stacking the vectors \( \{v_i\}_{i=1}^n \).

4.2 Proof Sketch of Theorem 2

In this section, we describe the high-level proof idea of Theorem 2. Note that Theorem 2 characterizes the first-order stationary points of \( \text{Tr} D^2 L \) on \( M \), i.e., points \( \theta^* \) on \( M \) for which \( \nabla \text{Tr} D^2 L(\theta^*) = 0 \). The starting point of the proof is the observation that the gradients \( Df_i(\theta) \) form a basis for the normal space at \( \theta \).

**Lemma 1** (Basis for the normal space). For every \( \theta \), the set of vectors \( \{Df_i(\theta)\}_{i=1}^n \) forms a basis for the normal space \( T_{\theta}^N(M) \) of \( M \) at \( \theta \).

For a stationary point \( \theta^* \) with \( \nabla \text{Tr} D^2 L(\theta^*) = 0 \), the Euclidean gradient at \( \theta^* \) must lie in the normal space \( T_{\theta^*}^N(M) \). But by Lemma 1, this means there exist coefficients \( \{\alpha_i\}_{i=1}^n \) such that
\[ D \text{Tr} D^2 L(\theta^*) = \sum_{i=1}^n \alpha_i Df_i(\theta^*). \] (7)
To further understand what condition equation (7) means for our two-layer network of equation (1), we first state a result from prior work in Lemma 2 (see, e.g., Blanc et al., 2020) regarding the trace of the Hessian of the mean squared loss \( L \) for a parameter \( \theta \in M \) (see Section D.2 for a proof).

**Lemma 2.** For the loss on the dataset defined in equation 2 and \( \theta \) with \( L(\theta) = 0 \), we have
\[ \text{Tr} D^2 L(\theta) = \sum_{i=1}^n \|Df_i(\theta)\|^2. \]

Using Lemma 2, we can explicitly calculate the trace of Hessian for our two-layer network in equation (1).

**Lemma 3** (Trace of Hessian in two-layer networks). For the neural network model in equation (1) and the mean squared loss \( L \) in equation (2), we have
\[ \text{Tr} D^2 L(\theta) = \sum_{i=1}^n \sum_{j=1}^m \phi'(\theta_j^\top x_i)^2. \] (8)

Now we are ready to prove Theorem 2.

**Proof of Theorem 2.** From equation (7), by explicitly computing the gradient of the expression in Lemma 3 and the gradients \( Df_i(\theta^*) \), we have
\[ \left[ \sum_{i=1}^n 2\phi'(\theta_j^\top x_i)\phi''(\theta_j^\top x_i)x_i \right]_{j=1}^m = \sum_{i=1}^n \alpha_i \left[ \phi'(\theta_j^\top x_i)x_i \right]_{j=1}^m. \]
Using our assumption that the data points \( \{x_i\}_{i=1}^n \) are linearly independent, and dividing by \( \phi'(\theta_j^\top x_i) > 0 \) (absorbing the factor 2 into \( \alpha_i \)), we have \( \phi''(\theta_j^\top x_i) = \alpha_i \) for all \( i \in [n] \) and \( j \in [m] \). Now, because \( \phi''' \) is positive, \( \phi'' \) is strictly monotone and its inverse is well-defined:
\[ \theta_j^\top x_i = \nu_i \triangleq \phi''^{-1}(\alpha_i). \] (9)
This implies \( \phi(\theta_j^\top x_i) = \phi(\nu_i) \). Therefore,
\[ y_i = \sum_{j=1}^m \phi(\theta_j^\top x_i) = m\phi(\nu_i). \] (10)
Note that the strict monotonicity of \( \phi \) implied by Assumption 1 (since \( \phi' > 0 \)) makes \( \phi \) invertible. Therefore, equation (10) implies
\[ \nu_i = \phi^{-1}(y_i/m), \quad \text{and} \quad \alpha_i = \phi''(\phi^{-1}(y_i/m)). \]
This characterizes the first-order optimal points of \( \text{Tr} D^2 L \). On the other hand, note that \( \text{Tr} D^2 L \geq 0 \), so its infimum over \( M \) is well-defined. Moreover, the fact that the Jacobian \( Df(\theta) \) is non-degenerate at all points \( \theta \in M \) (Lemma 17) implies that \( M \) is also topologically closed in \( \mathbb{R}^{md} \); hence, by the continuity of \( \text{Tr} D^2 L \), it achieves its infimum. But global minimizers are also stationary points, and from equation (9) we see that all of the first-order optimal points have the same value of \( \text{Tr} D^2 L \). Therefore, all first-order optimal points are global optima as well, and they satisfy equation (9). \( \square \)

Results for General Second-Layer Weights: Note that if the weights of the second layer are fixed arbitrarily to \( (w_j)_{j=1}^m \), then equation (9) simply turns into \( \theta_j^\top x_i = \phi''^{-1}(\alpha_i/w_j) \); namely, the \( \phi'' \) image of the feature matrix \( \Theta^* X \) is rank one. Interestingly, this shows that in the general case, the right matrix to look at for obtaining a low-rank structure is the embedding of the feature matrix by the second derivative of the activation (for the cube activation this embedding is the feature matrix itself). Next, we investigate the convergence of the gradient flow in equation (3) to a global optimum \( \theta^* \).

4.3 Proof Sketch of Theorem 3

The first claim (C.1) of Theorem 3 is that after time \( t \geq \tilde{\Omega}(\log(1/\epsilon)) \), the gradient will always be smaller than \( \epsilon \). It is actually trivial to prove that there exists some time at most \( t = \tilde{O}(1/\epsilon^2) \) such that \( \| \nabla \text{Tr} D^2 L(\theta(t)) \| \leq \epsilon \) without any assumption, based on a standard argument that is a continuous version of the descent lemma. Namely, if the gradient of \( \text{Tr} D^2 L \) remains larger than some \( \epsilon > 0 \) along the flow until time \( t > \text{Tr} D^2 L(\theta(0))/\epsilon^2 \), then, since \( \text{Tr} D^2 L(\theta(t)) \geq 0 \), we get a contradiction:
\[ \text{Tr} D^2 L(\theta(0)) \geq \text{Tr} D^2 L(\theta(0)) - \text{Tr} D^2 L(\theta(t)) = \int_0^t \| \nabla \text{Tr} D^2 L(\theta(s)) \|^2 \, ds \geq t\epsilon^2 > \text{Tr} D^2 L(\theta(0)). \]
Our novelty here is a new characterization of the loss landscape under Assumptions 1, 2 (Lemma 4, proved in Appendix C.1).

**Lemma 4** (PSD Hessian when gradient vanishes). Suppose that the activation \( \phi \) satisfies Assumptions 1, 2, and 3. Consider a point \( \theta \) on the manifold where the gradient is small, namely \( \| \nabla_\theta \text{Tr} D^2 L(\theta) \| \leq \sqrt{\mu \beta} \). Then, the Hessian of \( \text{Tr} D^2 L \) on the manifold is PSD at the point \( \theta \), or equivalently, \( \text{Tr} D^2 L \) is locally \( g \)-convex on \( M \) around \( \theta \).

Lemma 4 implies that whenever the gradient is sufficiently small, the time derivative of the squared gradient norm will also be non-positive:
\[ \frac{d}{dt} \| \nabla \text{Tr} D^2 L(\theta(t)) \|^2 = -\nabla \text{Tr} D^2 L(\theta(t))^\top \nabla^2 \text{Tr} D^2 L(\theta(t)) \nabla \text{Tr} D^2 L(\theta(t)) \leq 0. \]
Therefore, once the gradient is sufficiently small, it will always remain small. In fact, we show that the Hessian of \( \text{Tr} D^2 L \) on the manifold is strictly positive with a uniform \( \varrho_1 \varrho_2 \mu \) lower bound in a suitable subspace which includes the gradient (Lemma 10).
Then \[ \frac{d}{dt} \| \nabla \text{Tr} D^2 L(\theta(t)) \|^2 = -\nabla \text{Tr} D^2 L(\theta(t))^\top \nabla^2 \text{Tr} D^2 L(\theta(t)) \nabla \text{Tr} D^2 L(\theta(t)) \leq -\varrho_1 \varrho_2 \mu \| \nabla \text{Tr} D^2 L(\theta(t)) \|^2, \] which further implies a linear convergence of gradient norm: \[ \| \nabla \text{Tr} D^2 L(\theta(t)) \|^2 \leq \| \nabla \text{Tr} D^2 L(\gamma(\theta(t_0))) \|^2 e^{-(t-t_0)\varrho_1 \varrho_2 \mu}. \] Next, we state the high level ideas that we use to prove Lemma 4, which is the key to proving Theorem 3. Before that, we recall some necessary background from Differential Geometry. **Computing the Hessian on \( M \).** To prove Lemma 4, we need to calculate the Hessian of \( \text{Tr} D^2 L \) on the manifold. Note that the Hessian of a function \( F \) on \( \mathbb{R}^{md} \) can be defined as \( D^2 F[u,w] = \langle D(DF(\theta))[u], w \rangle \) where \( DF(\theta) \) is the usual Euclidean gradient of \( F \) at \( \theta \), and \( D(.)[u] \) denotes directional derivative. To calculate the Hessian on the manifold, one needs to substitute the Euclidean gradient \( DF(\theta) \) by the gradient \( \nabla F(\theta) \) on the manifold. Moreover, \( D(DF(\theta))[u] \) has to be substituted by the covariant derivative \( \nabla_u \nabla F(\theta) \), a different differential operator than the usual derivative. We recall the characterization of the covariant derivative as the projection of the conventional directional derivative onto the tangent space. For more background on covariant differentiation, we refer the reader to Appendix E. **Fact 1.** For vector fields \( V, W \) on \( M \), we have \( \nabla_V W(\theta) = P_\theta DW(\theta)[V] \). We also recall the definition of Hessian \( \nabla^2 F \) on \( M \) based on Covariant derivative. **Fact 2.** The Hessian of \( F \) at point \( \theta \) on \( M \) is given by \( \nabla^2 F(w,u) = \langle \nabla_w \nabla F, u \rangle \). We point out that on a general manifold the dot product \( \langle , \rangle \) in Fact 2 is with respect to the metric of the manifold. However, in the case of a hypersurface \( M \subseteq \mathbb{R}^{md} \), the metric is the same as that of the Euclidean chart \( \mathbb{R}^{md} \) that \( M \) is embedded in. Next, we explicitly calculate the Hessian of \( F \triangleq \text{Tr}D^2L \) on \( M \) in Lemma 8, which is the key in relating the norm of the gradient of \( \text{Tr}D^2L \) to its Hessian in proving Lemma 4, exploiting the formula that we derive in Lemma 5 in Appendix C for the Hessian of a general smooth function \( F \) over \( M \). Lemma 5 is proved in Appendix C.1.1. **Lemma 5 (Hessian of the implicit regularizer on the manifold).** Recall that \( \{Df_i(\theta)\}_{i=1}^n \) is a basis for the normal space \( T_N^\theta(M) \) according to Lemma 1. Let \( \alpha' = (\alpha'_i)_{i=1}^n \) be the coefficients representing \( P_N^\theta(D(\text{Tr}D^2L)(\theta)) \in T_N^\theta(M) \) in the basis \( \{Df_i(\theta)\}_{i=1}^n \), i.e. \[ P_N^\theta \left( D(\text{Tr}D^2L)(\theta) \right) = \sum_{i=1}^n \alpha'_i Df_i(\theta). 
\] Then, the Hessian of \( \text{Tr}D^2L \) on \( M \) can be explicitly written (in the Euclidean chart \( \mathbb{R}^{md} \)) using \( \alpha' \) as
\[ \nabla^2\text{Tr}D^2L(\theta)[u,w] = D^2\text{Tr}D^2L(\theta)[u,w] - \sum_{i=1}^n \alpha'_i D^2 f_i(\theta)[u,w], \] (11)
for arbitrary \( u, w \in \mathbb{R}^{md} \), where recall that \( D^2 \) denotes the ordinary Euclidean Hessian, while we use \( \nabla^2 \) for the Hessian over the manifold.

Observe that in the formula for \( \nabla^2\text{Tr}D^2L \) in equation 11, the first term is just the ordinary Euclidean Hessian of \( \text{Tr}D^2L \), while the second “projection term” arises from the difference between covariant differentiation and the usual derivative. Finally, to prove the second claim (C.2) of Theorem 3, we prove Lemma 6 in Appendix D.5, which shows that a small gradient norm \( \| \nabla \text{Tr}D^2L(\theta) \| \leq \delta \) implies the (approximate) alignment of features.

**Lemma 6** (Small gradient implies close to optimum). Suppose \( \| \nabla \text{Tr}D^2L(\theta) \| \leq \delta \). Then, for all \( i \in [n], j \in [m] \), \( \left| \theta_j^\top x_i - {\theta_j^*}^\top x_i \right| \leq \delta / (\sqrt{\mu_1 \mu_2}) \), for \( \theta^* \) as defined in equation 4.

## 5 Conclusion

In this paper, we take an important step toward understanding the implicit bias of label noise SGD, with its trace-of-Hessian-induced regularizer, for a class of two-layer networks. We discover an intriguing relationship between this induced regularizer and the low-rank simplicity bias conjecture for neural networks proposed in Huh et al. (2021): we show that by initializing the neurons in the subspace of high-dimensional input data, all of the neurons converge to a single vector. Consequently, for the final model, (i) the rank of the feature matrix is effectively one, and (ii) the functional representation of the final model is very simple; specifically, its sub-level sets are half-spaces. To prove this, in spite of the lack of convexity, we uncover a novel structure in the landscape of the loss regularizer: the trace of Hessian regularizer becomes locally geodesically convex at points that are approximately stationary. Furthermore, in the limit of step size going to zero, we prove that label noise SGD or 1-SAM converges exponentially fast to a global minimizer. Generalizing the class of activations that enjoy fast convergence (or proving the existence of fundamental barriers), as well as handling the case of low-dimensional input data, are interesting future directions. Based on the compelling empirical evidence from Huh et al. (2021) and our results for two-layer networks, we hypothesize that a low-rank simplicity bias can also be shown for deeper networks, possibly using the sharpness minimization framework.

## References

Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning. *arXiv preprint arXiv:2001.04413*, 2020a. Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. *arXiv preprint arXiv:2012.09816*, 2020b. Maksym Andriushchenko, Dara Bahri, Hossein Mobahi, and Nicolas Flammarion. Sharpness-aware minimization leads to low-rank features. *arXiv preprint arXiv:2305.16292*, 2023a. Maksym Andriushchenko, Aditya Vardhan Varre, Loucas Pillaud-Vivien, and Nicolas Flammarion. SGD with large step sizes learns sparse features. In *International Conference on Machine Learning*, pp. 903–925. PMLR, 2023b. Sanjeev Arora, Zhiyuan Li, and Abhishek Panigrahi.
Understanding gradient descent on the edge of stability in deep learning. In *International Conference on Machine Learning*, pp. 948–1024. PMLR, 2022. Peter L Bartlett, Philip M Long, and Olivier Bousquet. The dynamics of sharpness-aware minimization: Bouncing across ravines and drifting towards wide minima. *arXiv preprint arXiv:2210.01513*, 2022. Guy Blanc, Neha Gupta, Gregory Valiant, and Paul Valiant. Implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. *arXiv preprint arXiv:1904.09080*, 2019. Guy Blanc, Neha Gupta, Gregory Valiant, and Paul Valiant. Implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. In *Conference on learning theory*, pp. 483–513. PMLR, 2020. Danqi Chen and Christopher D Manning. A fast and accurate dependency parser using neural networks. In *Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)*, pp. 740–750, 2014. Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability, 2021. Jeremy M Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David Cardoze, Zachary Nado, George E Dahl, et al. Adaptive gradient methods at the edge of stability. *arXiv preprint arXiv:2207.14484*, 2022. Alex Damian, Tengyu Ma, and Jason D Lee. Label noise sgd provably prefers flat global minimizers. *Advances in Neural Information Processing Systems*, 34:27449–27461, 2021. Alex Damian, Eshaan Nichani, and Jason D Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. *arXiv preprint arXiv:2209.15594*, 2022. Lijun Ding, Dmitriy Drusvyatskiy, and Maryam Fazel. Flat minima generalize for low-rank matrix recovery. *arXiv preprint arXiv:2203.03756*, 2022. Manfredo P Do Carmo. *Differential geometry of curves and surfaces: revised and updated second edition*. Courier Dover Publications, 2016. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. *arXiv preprint arXiv:1703.11008*, 2017. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2021. Khashayar Gatmiry and Santosh S Vempala. Convergence of the riemannian langevin algorithm. *arXiv preprint arXiv:2204.10818*, 2022. Khashayar Gatmiry, Jonathan Kelner, and Santosh S Vempala. Sampling with barriers: Faster mixing via lewis weights. *arXiv preprint arXiv:2303.00480*, 2023. Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. *Advances in Neural Information Processing Systems*, 30, 2017.
bQNiz6aid0
As far as I understood the Barren Plateau problem, the probability that the gradient vanishes grows exponentially in n, i.e., the variance of the gradient becomes exponentially small, i.e., O(2^⁻n). Authors claim their method achieves O(2^-n/2) and in some situations even O(1). This analysis only compares to the “naive” QNN and not to any of the other works trying to overcome the problem already?
QUANTUM SEQUENTIAL SCATTERING MODEL FOR QUANTUM STATE LEARNING Anonymous authors Paper under double-blind review ABSTRACT Learning probability distribution is an essential framework in classical learning theory. As a counterpart, quantum state learning has spurred the exploration of quantum machine learning theory. However, as dimensionality increases, learning a high-dimensional unknown quantum state via conventional quantum neural network approaches remains challenging due to trainability issues. In this work, we devise the quantum sequential scattering model (QSSM), inspired by the classical diffusion model, to overcome this scalability issue. Training of our model could effectively circumvent the vanishing gradient problem to a large class of high-dimensional target states possessing polynomial-scaled Schmidt ranks. Theoretical analysis and numerical experiments provide evidence for our model’s effectiveness in learning both physical and algorithmic meaningful quantum states and show an out-performance beating the conventional approaches in training speed and learning accuracy. Our work has indicated that an increasing entanglement, a property of quantum states, in the target states, necessitates a larger scaled model, which could reduce our model’s learning performance and efficiency. 1 INTRODUCTION The innovation of classical machine learning has brought significant convenience and efficiency in industry and society. In particular, learning distributions between individual events and data is one of the crucial tasks for multiple usages in decades [Anderson et al., 1977; Geng, 2016]. A plethora of approaches and schemes have been designed to learn probability distributions, such as continuous evolutionary algorithms [Hansen et al., 2015; Kern et al., 2004] and supervised learning within the neural network (NN) framework including Boltzmann machine, graph neural network and diffusion model [Baum & Wilczek, 1987; Franceschi et al., 2019; Hoogeboom et al., 2021]. Meanwhile, by the fast growth of the requirement on computational power, quantum computing, as a prospective new framework, is expected to provide advantages over classical technology. The remarkable achievements from classical machine learning models [LeCun et al., 2015; Serban et al., 2016] have spurred the generation of their counterparts within the field of quantum machine learning (QML) [Biamonte et al., 2017; Schuld et al., 2015; Lloyd et al., 2013; Schuld et al., 2014]. Quantum neural networks (QNNs) composed of layers of parametrised quantum circuits have received massive attention regarding various architectures addressing computation challenges [Rebentrost et al., 2018; Zhao et al., 2019; Cong et al., 2019], including quantum state learning. In quantum, the correlations between quantum data are encoded in the quantum states. Consequently, the task of learning an arbitrary quantum state bears a resemblance to classical distribution learning, which has inspired developments of state learning QML models [Chowdhury et al., 2020; Ghosh et al., 2019; Wang et al., 2021a]. As a main solution to quantum state learning, however, the implementation of the QNN-based methods suffers obstacles in efficiency, scalability and trainability. Specifically, training deep QNNs composed of multiple layers can experience exponentially vanishing gradients, or called barren plateaus (BP) [McClean et al., 2018] when targeting high-dimensional states. 
This work proposed a quantum sequential scattering model (QSSM) to overcome this bottleneck in QNN-powered state learning techniques. We provide both theoretical and numerical demonstrations of QSSM on training efficiency and learning accuracy, which can outperform the conventional QNN model using universal layers. Recent research on the trainability issue of QNNs indicates prospective directions by reducing the expressibility of QNN architectures (Cerezo et al., 2021; Liu et al., 2022a), adopting clever parameterization strategies (Grant et al., 2019; Kulshrestha & Safro, 2022; Volkoff & Coles, 2021; Friedrich & Maziero, 2022) and using adaptive algorithms (Grimsley et al., 2019; Zhang et al., 2021; Skolik et al., 2021; Grimsley et al., 2022). We drew inspiration from the classical diffusion model (Yang et al., 2022) by conducting the state learning with progressively augmenting sublevels in a sequential manner. Our model combines the ideas of quantum purification theory and adaptive and layerwise training (Quek et al., 2021; Skolik et al., 2021), for which the training process can be treated as the dilation of quantum information from subsystems to the entire one. The structure of the model ensures a dramatic reduction in the number of optimized parameters at each training step and, therefore, avoids barren plateaus for a large class of target states. 2 PRELIMINARIES 2.1 CLASSICAL DISTRIBUTION LEARNING We briefly introduce the formalism concerning classical probability distribution learning. Correlations between discrete data variables, denoted as $X$, can be characterized by some probability distributions $D$ (Kearns et al., 1994). The learning of such a distribution can be described as constructing a generator $G_D'$ that takes $x \in X$ as an argument and outputs $G_D'[x] \in X$ with respect to a distribution $D'$. The generator can be realized via a classical machine learning model, which is trained to achieve $d(D, D') \leq \varepsilon$ for some legal metric $d$, e.g., Kullback-Leibler divergence (Csiszar, 1975), and some threshold error $\varepsilon$. 2.2 QUANTUM STATE LEARNING A typical quantum state learning task for an unexplored target state $\rho$, as a density matrix, solves for a generator that can be efficiently constructed to produce a representation $\rho'$ which $D(\rho, \rho') \leq \varepsilon$ resembling classical distribution learning. Here $D$ is a feasible distance measure on matrix space. Such a generator can veritably produce $\rho'$ instead of numerically simulating it (Vidal, 2003) and can be repeatedly used in further computational tasks. This work focuses on the QNN-powered algorithms combining both classical and quantum computation. Utilizing parameterized quantum circuits working as the state generators that are trained by gradient descent or gradient-free methods to determine the optimal parameters (Peruzzo et al., 2014; Kandala et al., 2017). Beyond our scope, schemes using shadow tomography (Aaronson, 2018; Huang, 2022) fulfill another category of state learning with the aim of characterizing the classical information of quantum states. 2.3 QUANTUM COMPUTING & QNN LAYERS Our notations follow the conventional textbook by Nielsen and Chuang (Nielsen & Chuang, 2010). For more, we invite readers to have access to the supplementary material (Appendix A) for more details on quantum computing and quantum machine learning. Quantum information is encoded and processed via the fundamental cells, namely, qubits. 
An $n$-qubit state can be mathematically represented by a $2^n \times 2^n$ positive semi-definite density matrix $\rho$, i.e., $\rho \succeq 0$ over the complex field and $\text{Tr}[\rho] = 1$. A pure state, in this formulation, satisfy $\text{Rank}(\rho) = 1$ and can be expressed in Dirac bra-ket notation as $\rho = |\psi\rangle\langle\psi|$ where $|\psi\rangle \in \mathbb{C}^{2^n}$ denotes a Hilbert space unit column vector with the corresponding dual vector $\langle\psi|^\dagger = |\psi\rangle$ and $\dagger$ denoting the complex conjugate transpose operation. A mixed state satisfies $\text{Rank}(\rho) > 1$, and based on Spectral theorem, it has a decomposition form $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$ where $p_j > 0$ denotes the probability of observing $|\psi_j\rangle\langle\psi_j|$ in $\rho$ and $\sum_j p_j = 1$. The evolution of a quantum state $\rho$ is realized by applying a series of quantum gates which are mathematically described as unitary operators. The state $\rho'$ that undergoes transformation via a quantum gate $U$ can be obtained through direct matrix multiplication, expressed as $\rho' = U\rho U^\dagger$. Common single-qubit gates include the Pauli rotations $\{R_P(\theta) = e^{-i\frac{\theta}{2}P}|P \in \{X, Y, Z\}\}$, which... are in the matrix exponential form of Pauli matrices \[ X := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \] (1) Multi-qubit gates, e.g., controlled-\(X\) gate \(CX\) (or CNOT) = \(I \oplus X\) and controlled-\(Z\) gate \(CZ = I \oplus Z\) where \(‘\oplus’\) denotes the direct sum operation live in high-dimensional linear operator space over \(C\). Quantum measurements working as projections are applied at the end of the quantum circuits. Quantum neural networks are usually formed by layers of parameterized circuits shown in Fig. 1 consisting of a bunch of single-qubit gates and several two-qubit gates. 3 Main Results In this paper, we design a quantum sequential scattering model (QSSM) absorbing the ideas of classical diffusion model and adaptive learning Qu et al. (2021), which has modular structured parametrised circuits, or we called the scattering layer, at each training step. Each layer ensures learning the reduced density matrix of a specific part in the target state so that the model can gradually rebuild the entire state after accomplishing all training steps. Our main contributions involve (1) conceptually proposing the idea of combining quantum information diffusion and adaptive quantum state learning, (2) technically devising a new quantum neural network model, namely QSSM and the state learning algorithm via a sequentially subsystem-learning strategy, (3) theoretically proving the effectiveness of the state learning algorithm and a polynomial-scaled gradient variance of QSSM which indicates an avoidance of barren plateaus for rank-restricted state learning, (4) numerically demonstrating our results on learning different quantum states involving the noise effects. We compare QSSM directly to the conventional QNN model for handling state learning tasks and showcase its enhancement in both training efficiency and learning accuracy. The main results are presented in the following sections. 3.1 Quantum Sequential State Compositing Quantum states are represented in a multiple-qubit system with a fixed order. 
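To make these preliminaries concrete before registers and rank sequences are introduced formally, the following is a minimal NumPy sketch (our own illustration, not code from the paper): it builds the Pauli rotations of equation 1, evolves a density matrix, and computes the reduced density matrices $\rho_k$ that the rest of this section relies on. All function names are ours.

```python
import numpy as np

# Pauli matrices from equation 1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_rotation(P, theta):
    """R_P(theta) = exp(-i * theta/2 * P); valid because P^2 = I."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

def evolve(rho, U):
    """Evolution of a density matrix: rho' = U rho U^dagger."""
    return U @ rho @ U.conj().T

def reduced_density_matrix(psi, k, n):
    """rho_k = Tr_{q_{k+1}:q_n}[|psi><psi|] for an n-qubit pure state vector psi."""
    M = psi.reshape(2 ** k, 2 ** (n - k))
    return M @ M.conj().T

# Example: a 3-qubit GHZ state (|000> + |111>)/sqrt(2).
n = 3
ghz = np.zeros(2 ** n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

rho_1 = reduced_density_matrix(ghz, 1, n)              # single-register reduced state
print(np.round(rho_1, 3))                              # diag(0.5, 0.5): maximally mixed
print("Tr[rho_1] =", np.trace(rho_1).real)             # 1.0
print("rank(rho_1) =", np.linalg.matrix_rank(rho_1))   # 2

# A Pauli-Z rotation acting on a single-qubit density matrix.
rho = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
print(np.allclose(evolve(rho, pauli_rotation(Z, 0.7)), rho))  # True: |0><0| is invariant
```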
We treat each qubit as a quantum register, just like classical bit and classical register, and label it \(q_k\) for the \(k\)-th register. We then define a special characteristic for quantum states. **Definition 1** Given an \(n\)-qubit quantum state \(\rho\) represented by \(n\) ordered quantum registers labeled as \(q_1, q_2, \ldots, q_n\), denoting \(\rho_k\) as the \(k\)-th reduced density matrix of the first \(k\)-register state, i.e., \(\rho_k = \text{Tr}_{q_{k+1}:q_n}[\rho]\) for \(1 \leq k \leq n\) where the operation \(\text{Tr}_{q_i:q_j}[\cdot]\) representing a partial tracing over registers \(q_i\) to \(q_j\), the (Schmidt) rank sequence of \(\rho\) is an ordered list \(R_\rho\), \[ R_\rho = \{r_1, r_2, \ldots, r_{n-1}, r_n\}, \] (2) where \( r_k \) indicates \( \text{Rank}[\rho_k] \). In particular, if \( \rho \) is pure, then \( r_n = 1 \) since \( \rho \) can be represented as \( |\phi\rangle\langle\phi| \) for some pure state vector \( |\phi\rangle \). With these clarified, we could then present our sufficient and necessary conditions for QSSM to completely learn a target state using Algorithm 1 provided enough training time and layer width. Our analysis will concentrate on the pure target state \( \rho \). However, the statement applies to the cases of mixed target states i.e., \( \text{Rank}[\rho] > 1 \), since we could equivalently learn its purification state by introducing auxiliary systems. The formal version of proposition 1 can be found in Appendix B. **Proposition 1** For a given \( n \)-qubit pure target state \( \rho \) represented by \( n \) ordered quantum registers \( q_1, q_2, \ldots, q_n \), if the rank sequence of \( \rho \) is \( R_\rho = \{r_1, r_2, \ldots, r_{n-1}, r_n\} \). Then there exists a quantum algorithm based on QSSM, that could produce a state \( \sigma \) exactly satisfying \( \sigma = \rho \), if and only if the \( k \)-th scattering layer \( U_k(\theta_k) \) of QSSM has a width \( w_k \) scales \( O(\lceil \log_2 r_k \rceil) \). We see that the width of each scattering layer scales only logarithmic regarding the target states’ rank sequence. In general, even the rank of quantum pure state scales \( O(2^{\lceil n/2 \rceil}) \), the logarithmic scaling in \( w_k \) still guarantees a linear growth in the requirement of layer width concerning the number of qubits \( n \), in the worst case. Moreover, though many quantum states have full rank, there is a polynomial number of dominant components in their spectral decomposition. Learning their low-rank approximation pre-determined by the quantum principal component analysis (QPCA) [Lloyd et al. (2014)] can be treated as a quantum compressing of unknown states, which still captures the main statistical behaviours of target states. With a certain error tolerance for the low-rank approximation, the layer width can be further reduced, leading to more advantages in QSSM state learning. In the Numerical Simulations (Section 5), we provide evidence of learning different states’ rank-restricted approximation. Compared to the \( n \)-qubit universal-QNN model state learning, QSSM demands significantly fewer parametric degrees of freedom (DOF) to reach the same approximating error. The generating Lie algebra of an \( n \)-qubit universal QNN model has to span \( \text{SU}(2^n) \), resulting in a model DOF of \( O(4^n) \). 
On the contrary, since the \( k \)-th scattering layer involves at most \( (\lfloor \frac{n}{2} \rfloor + 1) \) quantum registers, the total DOF of QSSM experiences a quadratic reduction to at most \( O(4^{\lfloor \frac{n}{2} \rfloor}) \). Moreover, to learn a polynomially rank-bounded target state \( \rho \), i.e., one with \( r_{\max} = \max R_\rho \in O(\text{Poly}(n)) \), the DOF required for each scattering layer in QSSM scales as \( O(\text{Poly}(n)) \). Therefore, the entire model comprises fewer quantum gates, rendering this approach considerably more hardware-efficient.

### 3.2 Avoiding Barren Plateaus

Trainability is a critical challenge for the usage of quantum neural networks. Using a global deep QNN model brings stronger expressibility but significantly increases the randomness of initialization. As a result, the initial gradient of the trainable parameters vanishes exponentially as the system scales up, which is known as the Barren Plateau (BP) issue [McClean et al. (2018)]. By diffusing local quantum state information, QSSM has the potential to address this trainability issue by focusing on subsystems in each scattering layer instead of the whole state. From the perspective of adaptive learning, we align the reduced quantum states of the \( k \)-th subsystem by minimizing the \( k \)-th adaptive cost function of equation 3 during the respective layer training,
\[ C_k(\theta) = \| \sigma_k(\theta) - \rho_k \|_2^2 = \text{Tr} \left[ (\sigma_k(\theta) - \rho_k)(\sigma_k(\theta) - \rho_k)^\dagger \right], \]
where \( \| A \|_2 \) for some linear operator \( A \) denotes the Schatten-2 norm, and \( \sigma_k(\theta_k) \) and \( \rho_k \) represent the \( k \)-th scattering layer's produced state and the \( k \)-th reduced target state, respectively. In this section, we show that QSSM has explicit advantages in trainability by investigating the statistical properties of the partial gradient with respect to particular layer parameters. For the cost gradient \( \partial_\mu C_k \) with respect to the \( \mu \)-th trainable parameter in the \( k \)-th scattering layer, written as \( U_k(\theta) = U^{(k)}_+(\theta_+) e^{-i\theta_\mu H_\mu} U^{(k)}_-(\theta_-) \), all the parameters in the layer are collected in a parameter vector \( \theta = (\theta_+, \theta_\mu, \theta_-) \), where \( \theta_- \) and \( \theta_+ \) represent the parameters of the forward and the backward parts within the \( k \)-th scattering layer, with \( e^{-i\theta_\mu H_\mu} \) in between. The results are summarized in the following proposition.

**Proposition 2** Given the state learning algorithm stated in Proposition 1 for an n-qubit pure target state \( \rho \) represented by \( n \) ordered quantum registers \( q_1, q_2, \cdots, q_n \) with a rank sequence \( R_\rho = \{ r_1, r_2, \cdots, r_{n-1}, r_n \} \), if one of the \( U^{(k)}_\pm \) in the k-th scattering layer \( U_k \) forms at least a local unitary 4-design, then the expectation and the variance of the gradient \( \partial_\mu C_k \) can be bounded as
\[ \mathbb{E}[\partial_\mu C_k] = 0; \quad \text{Var}[\partial_\mu C_k] \in O \left( \frac{g(\rho_k)}{r_k} \right), \]
where the expectation is computed with respect to the Haar measure and the factor \( g(\rho_k) \) scales polynomially in \( \text{Tr}[\rho_k^2] \), known as the purity of \( \rho_k \). The formal statement of Proposition 2 is presented in Appendix C.
This proposition notably implies that the gradient magnitude is significantly determined by \( r_{\text{max}} \) in \( R_\rho \) rather than the total number of quantum registers \( n \). In other words, the gradient magnitude can escape from barren plateaus by carefully setting the width of each scattering layer to adapt to the target state. A typical example is to learn an n-qubit GHZ state, which, by its symmetry, requires setting \( w_k \leq 2 \) for all scattering layers in QSSM and hence achieves \( O(1) \) upper bound in the variance of the gradient. Moreover, Proposition 2 implies that QSSM can efficiently facilitate the learning of any pure states with polynomial-scaling \( r_{\text{max}} \) in \( n \). This encompasses a broad class of quantum states, including slightly entangled states [Vidal (2003)] and matrix product states [Perez-Garcia et al. (2006)], which extends the efficient-learnable region of quantum states using quantum neural network models. Even in the case where \( r_{\text{max}} \) scales exponentially, the gradient magnitude still gains a square root enhancement by the bounded variance of \( O(2^{-\lfloor n/2 \rfloor}) \) compared with the conventional model, scaling as \( O(2^{-n}) \) to reach the same learning accuracy. One may also apply the previous statement by allowing the error tolerance on the state learning and omitting the influence of the tail eigenvalues of the target states based on QPCA. Therefore, the efficient training condition of QSSM still applies to the low-rank state approximation learning by fixing a maximum scattering layer width. 4 QUANTUM SEQUENTIAL SCATTERING MODEL The fundamental idea of state learning using the quantum sequential scattering model (QSSM) is to composite the target states by gradually aligning reduced density matrices of subsystems. The model diffuses the local quantum information into the global system, which can be considered a quantum analogy of the classical diffusion model. In contrast, the conventional QNN model handles the entire system at a time. We now present the overview of our QSSM with an efficient state learning algorithm. Suppose we have access to the copies of an n-qubit pure target state \( \rho = |\phi\rangle\langle\phi| \) from some other quantum instances. The target state can be represented in a system containing \( n \) ordered quantum registers. Recalling \( \rho_k \) as the reduced density matrix on the first \( k \) registers, i.e., \( \rho_k = \text{Tr}_{q_{k+1}:q_n} [\rho] \), our model aims to construct a purification \( |\psi_k(\theta_k)\rangle = U_k(\theta_k)|\psi_{k-1}\rangle \) of \( \rho_k \) at the \( k \)-th learning step \( (1 \leq k \leq n) \) by training the \( k \)-th scattering layer realized as a parameterised circuit \( U_k(\theta_k) \). Notice that the learning results from the previous step are naturally involved in the state \( |\psi_{k-1}\rangle \) having all first \( k \) registers aligned. The training of each layer is based on minimizing some adaptive cost functions, which in this work, we use the modified distance function of form [3], where the \( k \)-th layer output state \( \sigma_k(\theta_k) = \text{Tr}_{q_{k+1}:q_n} [|\psi_k(\theta_k)\rangle\langle\psi_k(\theta_k)|] \). By hierarchically training the scattering layers until all registers are aligned, we could then construct the entire target through our trained quantum sequential scattering model. We summarize our quantum state learning algorithm via QSSM in Algorithm 1. 
Algorithm 1 Quantum sequential scattering model for (pure) state learning

Require: Copies of the $n$-qubit target state $\rho = |\phi\rangle\langle\phi|$; cost tolerance $\delta$.
Ensure: The entire model has $n$ quantum registers $q_1, q_2, \cdots, q_n$, all initialized to $|0\rangle^{\otimes n}$.
Parameter: All layer parameters are randomly initialized from the uniform distribution on $[0, 2\pi)$. Set $k = 1$ and the maximum layer width $w_{\text{max}}$.
1: Initialize the scattering layer width $w_k = k + 1$ and $|\psi_0\rangle = |0\rangle^{\otimes n}$.
2: while $k \leq n$ do
3:   if $k \leq \lfloor n/2 \rfloor$ then
4:     $w_k = \min\{k + 1, w_{\text{max}}\}$.
5:   else if $k > \lfloor n/2 \rfloor$ then
6:     $w_k = \min\{n - k + 1, w_{\text{max}}\}$.
7:   end if
8:   Apply $U_k(\theta_k)$ to the quantum registers indexed $q_k$ to $q_{k+w_k-1}$, i.e., $q_k : q_{k+w_k-1}$.
9:   Minimize $C_k(\theta_k)$ by running a classical training algorithm based on the analytic cost function and gradient $\nabla_{\theta_k} C_k$ evaluations. The minimization stops once the cost difference reaches $\delta$.
10:  $k = k + 1$.
11:  Update $|\psi_k\rangle = U_k(\theta_k)|\psi_{k-1}\rangle$.
12: end while
13: Store all optimized $\theta_1, \cdots, \theta_n$ in classical memory.
14: return the model-reconstructed representation $|\psi_n\rangle = U_n \cdots U_1 |0\rangle^{\otimes n} \approx |\phi\rangle$.
Output: The trained QSSM as an approximate state generator $U = U_n \cdots U_1$ of the target $|\phi\rangle$.

4.1 COST FUNCTION EVALUATION

As a hybrid quantum-classical model, QSSM requires a few details on how it is realized, which we give in the following. For the adaptive $k$-th step cost function defined in equation 3, we rearrange it as
\[ C_k(\theta_k) = \text{Tr}[\sigma_k^2(\theta_k)] + \text{Tr}[\rho_k^2] - 2 \text{Tr}[\sigma_k(\theta_k)\rho_k], \] (5)
which is convex according to Theorem 2.10 of Carlen (2009). We chose this cost form since it can be efficiently evaluated on quantum hardware. The high-order state overlap terms involving $\text{Tr}[\rho^2]$ and $\text{Tr}[\rho\sigma]$ can be evaluated via the swap test Barenco et al. (1997), which has been experimentally demonstrated on real quantum devices Islam et al. (2015); Linke et al. (2018). The training of the $k$-th layer can be described as finding the $k$-th step optimal parameters $\theta_k^{\text{opt}}$ such that $C_k(\theta_k^{\text{opt}})$ is minimized to approximately zero. To implement this, either classical gradient-based or gradient-free methods, such as ADAM and COBYLA Kingma & Ba (2014); Powell (1994), can be used during optimization. Other metrics can also be employed in the training procedure, and we leave this aspect open for future research.

4.2 Analytic Gradient Evaluation

Further, the analytic gradients of the cost function in equation 5 can be computed efficiently, making the gradient-based scheme a prospective candidate for the training process, following Schuld et al. (2018); Mitarai et al. (2018); Ostaszewski et al. (2019); Wang et al. (2021b). Suppose the $k$-th layer $U_k$ consists of gates satisfying the parameter-shift rule Mitarai et al. (2018); Schuld et al. (2018) and contains $m$ trainable parameters.
Each optimization iteration is driven by estimates of the cost gradient, given by
$$\nabla_{\theta_k} C_k(\theta_k) = \left( \partial_1 C_k(\theta_k), \cdots, \partial_m C_k(\theta_k) \right),$$
where $\partial_\mu := \frac{\partial}{\partial \theta_k^\mu}$ denotes the partial derivative with respect to a fixed $\theta_k^\mu$ in the $k$-th layer. In particular, we derive the analytic gradient of $C_k$ as follows,
$$\partial_\mu C_k^* = \langle G_k^* \rangle_{(\theta_k^\mu)^* + \frac{\pi}{2}} - \langle G_k^* \rangle_{(\theta_k^\mu)^* - \frac{\pi}{2}},$$
where the symbol $*$ indicates the corresponding quantity evaluated at $\theta_k = \theta_k^*$. $G_k$ is a Hermitian operator involving both $\sigma_k$ and $\rho_k$, with the expression
$$G_k(\theta_k) := \Delta_k(\theta_k) \otimes \Gamma_k,$$
where $\Delta_k(\theta_k) = \sigma_k(\theta_k) - \rho_k$ represents the $k$-th step difference between the two density matrices; $\Gamma_k$ is the maximally mixed state $I/d$, where $I$ is the identity operator of dimension $d = 2^{w_k-1}$, and $\Gamma_k = 1$ when $w_k = 1$. The bra-ket operation in the analytic form, $\langle A \rangle_\alpha = \langle \psi_{k-1} | U_k^\dagger(\theta_k) A U_k(\theta_k) | \psi_{k-1} \rangle$ for some Hermitian operator $A$, is evaluated at $\theta_k^\mu = \alpha$. The quantity $\langle G_k \rangle$ in equation 7 is therefore the expectation value of $G_k$ with respect to the $k$-th step variational ansatz $|\psi_k\rangle$ evaluated at $(\theta^\mu_k)^* \pm \pi/2$, where all other scattering layers remain unchanged. The detailed derivation of these definitions and forms can be found in Appendix D. Each partial derivative of $C_k$ at $\theta^*_k$ can be explicitly determined by equation 7 and can be efficiently computed by shifting the corresponding parameter and applying the variational quantum eigensolver [Peruzzo et al., 2014]. Gradient-based optimization can then be applied to the cost by updating the parameters $\theta_k$ of the $k$-th layer as
$$\theta_k \leftarrow \theta^*_k - \eta \nabla_{\theta_k} C_k(\theta^*_k),$$
where $\eta$ is the learning rate set for the classical optimizers, defining the iteration step size. The cost function converges to the optimal minimum by iterating the training process. We then repeat the above procedure for each layer to complete the model training, with a final output circuit representation $U(\theta^{opt}) = U_n(\theta^{opt}_n) \cdots U_1(\theta^{opt}_1)$, finishing the state learning.

| | Global QNN | QSSM ($w_{\max}=2$) | QSSM ($w_{\max}=3$) | QSSM ($w_{\max}=4$) | QSSM ($w_{\max}=6$) |
|---|---|---|---|---|---|
| **Physical States** | | | | | |
| XXX model GS | 0.533 | 0.523 | 0.883 | 0.915 | 0.956 |
| XXZ model GS | 0.523 | 0.750 | 0.887 | 0.954 | 0.952 |
| LiH molecule GS | 0.531 | 0.978 | 0.976 | 0.967 | 0.982 |
| **Algorithmic States** | | | | | |
| GHZ state | 0.535 | 0.994 | 0.993 | 0.992 | 0.978 |
| W state | 0.527 | 0.990 | 0.992 | 0.982 | 0.985 |
| Gaussian distribution | 0.561 | 0.969 | 0.985 | 0.976 | 0.986 |
| MNIST data encoding | 0.330 | 0.517 | 0.759 | 0.891 | 0.903 |
| Random state | 0.317 | 0.338 | 0.768 | 0.856 | 0.889 |

Fig 2: Effectiveness validation of QSSM in learning diverse 12-qubit quantum states regarding their final state fidelities. On the right, we show the QSSM-learnt state (b) from the MNIST dataset concerning the original data (a) using amplitude encoding. With different maximum layer widths, our QSSM outperforms the global QNN on state learning tasks.
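To make equations 5 and 7 concrete before the experiments, here is a small classical-simulation sketch (ours, purely illustrative; the paper evaluates these quantities on hardware via swap tests and the variational quantum eigensolver). It evaluates the cost of equation 5 for a toy width-1 layer with a single $R_Z$ parameter; because the produced state is pure in this toy case, the familiar $\pm\pi/2$ parameter-shift identity applies directly with a factor of $1/2$, whereas equation 7 is the hardware-friendly form of the same derivative expressed through the observable $G_k$.

```python
import numpy as np

def cost(sigma, rho):
    """Equation 5: Tr[sigma^2] + Tr[rho^2] - 2 Tr[sigma rho]
    (the overlap terms that a swap test estimates on hardware)."""
    return np.real(np.trace(sigma @ sigma) + np.trace(rho @ rho) - 2 * np.trace(sigma @ rho))

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # stand-ins for U_-^(k), U_+^(k)

def produced_state(theta, psi_prev):
    """Toy width-1 'scattering layer': one R_Z parameter between two fixed unitaries."""
    psi = H @ rz(theta) @ H @ psi_prev
    return np.outer(psi, psi.conj())

rho_target = np.diag([0.8, 0.2]).astype(complex)   # toy reduced target state rho_k
psi_prev = np.array([1.0, 0.0], dtype=complex)

def C(theta):
    return cost(produced_state(theta, psi_prev), rho_target)

theta = 0.3
shift_grad = (C(theta + np.pi / 2) - C(theta - np.pi / 2)) / 2   # parameter-shift estimate
fd_grad = (C(theta + 1e-5) - C(theta - 1e-5)) / 2e-5             # finite-difference check
print(shift_grad, fd_grad)                                       # both ~ 0.6*sin(0.3)
```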
5 Numerical Experiments

As described above, the adaptation of our quantum sequential scattering model indicates the underlying enhancement of information diffusion in quantum state learning. We now present numerical experiments to illustrate the effectiveness and trainability of QSSM. We first conduct numerical simulations on QSSM for learning 12-qubit quantum states with physical or algorithmic meaning and compare our results with the performance of the conventional QNN model. The ground states of the Heisenberg (XXX & XXZ) models [Takahashi, 1971] and the LiH molecular model are pre-determined via the OpenFermion library developed by [McClean et al., 2020]. For the Gaussian distribution and MNIST data learning experiments, the distribution and image data are normalized and mapped to unit quantum state vectors of dimension $2^n$ via amplitude encoding [Schuld, 2021], with automatic padding of 0's filling out the extra grayscale pixels. In our numerical simulations involving the global QNN and the QSSM, we employ a general hardware-efficient ansatz (HEA) [Kandala et al., 2017] of depth $d = 20$ with randomly initialized parameters for both the global model and each scattering layer in QSSM. The optimization uses the ADAM optimizer with a learning rate of 0.1 and cost tolerance 0.001, spanning 200 iterations. As shown in Fig. 2, comparing the outcomes with those of the global QNN, we discern clear advantages exhibited by QSSM, which consistently attains notably high fidelity in learning diverse quantum states. Conversely, the conventional model does not perform well, primarily due to the significantly decreased convergence speed during training with a large number of qubits. Besides, states with exponential growth in Schmidt ranks are not necessarily hard to learn. Only highly entangled states, e.g., random states and maximally entangled states (MES) [Gisin & Bechmann-Pasquinucci, 1998], are challenging for QSSM. Those with concentrated Schmidt coefficients, though owning large ranks, can be learnt up to a high fidelity Liu et al. (2022b) with limited resources. In Table 2, we reasonably constrain the maximum scattering layer widths to some fixed values, which counterintuitively yields superior performance with smaller layer width. Larger values of \( w_{\text{max}} \), on the contrary, decrease the state learning performance of QSSM. A plausible explanation for this phenomenon could be over-parameterization and a mild BP effect during the training of the halved-dimensional scattering layers. Notably, learning the random state gives the worst results. We also examine the noise robustness of using QSSM to learn a 4-qubit GHZ state on the IBMQ Qiskit simulator Qiskit contributors (2023). We build our noise model from single-qubit and multi-qubit depolarizing channels (DCs) and thermal relaxation channels (TRCs) Georgopoulos et al. (2021).
The error rate of the DCs is set to \( 10^{-3} \), and the \( T_1 \), \( T_2 \) and gate time of the TRCs are set to 1000 \( \mu s \), 100 \( \mu s \) and 1 ns, respectively. At each step, we run the optimization of the QSSM circuit 20 times in parallel and use the parameters that correspond to the lowest cost to update the circuit before going to the next step. This trick significantly alleviates the randomness arising from the sampling of bit strings in the measurement of quantum circuits. As shown in Fig. 3(a), the cost of each learning step converges well compared with the ideal noise-free training. The final fidelity between the quantum state generated from QSSM and the true GHZ state reaches 91%, giving almost the same statistical behaviour as plotted from the sampling experiments in Fig. 3(b). From the analytical description and numerical demonstration, we see that QSSM has the ability to learn arbitrary quantum states with high fidelity compared to the conventional model. The diffusion strategy only requires narrow circuits for learning quantum states that are weakly entangled, thus being extremely efficient on such a class of quantum states.

Fig 3: Noisy quantum simulation of QSSM for learning a 4-qubit GHZ state. (a) Comparison of the variation of the cost function between noisy quantum simulation and noise-free simulation. For both cases, the optimization was processed via the COBYLA optimizer Gomez & Hennart (1994) on swap-test estimated cost values. (b) The distribution of measurement outcomes generated noise-freely from the state obtained by the noisy trained QSSM. The figure validates the efficacy and efficiency of QSSM in noisy environments, consequently reinforcing our method's practical applicability.

We then present the result to demonstrate Proposition 2 by comparing the gradient variances of the cost in equation 3 as a function of the number of registers for QSSM and the global QNN model. We investigate the values in the first step, the middle step (the \( \frac{n}{2} \)-th step), and the last step of the QSSM learning procedure by looking into a single-parameter \( R_Z \) gate in the middle of each scattering layer. Assuming the two parts \( U^{(k)}_\pm \) split by the \( R_Z \) gate are deep enough to form local unitary 4-designs, we sample local Haar random unitaries Dankert et al. (2009) to simulate the behaviour of random initialization of \( U^{(k)}_\pm \) and compute the gradient variances with respect to the parameter in \( R_Z \). Similar experiments are performed for the conventional QNN model by sampling global Haar unitaries with an \( R_Z \) gate sandwiched in. We target the GHZ state and the ground state of the Heisenberg model, as before, with maximum width $w_{\text{max}}$ being 2 and 4, respectively. The variance values are computed from sampling 500 Haar unitary pairs for both cases.

**Fig 4:** Comparison of the gradient variances as a function of the number of qubits on a semi-log plot, from different steps in QSSM and from the global QNN, computed by sampling Haar random unitaries. Panels (a) and (b) correspond to the learning of the GHZ state and the ground state of the Heisenberg model, respectively. The red, black and blue lines represent the gradient magnitudes of the first step, the $\frac{n}{2}$-th step and the last step of training, respectively, compared with the global QNN results in yellow. Our method clearly outperforms the conventional global QNN in terms of gradient variance scaling, indicating the absence of barren plateaus.

As we can observe in Fig. 4, the variance of the gradient vanishes exponentially with the number of qubits when using the randomly initialized global QNNs. In contrast, QSSM demonstrates a constant scaling of the variance magnitude. We note that there is a decay of the gradient variance of the middle step in panel (b). Nevertheless, this decay is caused by a constant factor $g(\rho_k)$ that originates from the nature of the physical system and does not exponentially influence the training processes.
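As a purely classical stand-in for this experiment (our own sketch; the target state, cost and gradient estimator are all simplified, and finite differences replace hardware parameter shifts), the following reproduces the global-QNN half of Fig. 4: under globally Haar-random $U^{(k)}_\pm$, the gradient variance shrinks rapidly with the number of qubits. The QSSM curves would be obtained the same way, except that the sampled unitaries act only on the $w_k$-qubit scattering layer, so the relevant dimension, and hence the variance, stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    """Sample a Haar-random unitary via the QR decomposition trick."""
    A = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix the phases of R's diagonal

def rz_on_first_qubit(theta, dim):
    gate = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    return np.kron(gate, np.eye(dim // 2))

def cost(theta, U_minus, U_plus, rho_target, dim):
    """|| sigma(theta) - rho ||_2^2 as in equation 3, for a pure produced state."""
    psi = U_plus @ rz_on_first_qubit(theta, dim) @ U_minus @ np.eye(dim)[:, 0]
    sigma = np.outer(psi, psi.conj())
    diff = sigma - rho_target
    return np.real(np.trace(diff @ diff))

def gradient_variance(n_qubits, n_samples=200, eps=1e-4):
    dim = 2 ** n_qubits
    e0 = np.eye(dim)[:, 0]
    rho_target = np.outer(e0, e0)                  # toy target |0...0><0...0|
    grads = []
    for _ in range(n_samples):
        U_minus, U_plus = haar_unitary(dim), haar_unitary(dim)
        g = (cost(eps, U_minus, U_plus, rho_target, dim)
             - cost(-eps, U_minus, U_plus, rho_target, dim)) / (2 * eps)
        grads.append(g)
    return np.var(grads)

for n in (2, 4, 6):
    print(n, "qubits: Var[dC/dtheta] ~", gradient_variance(n))   # shrinks rapidly with n
```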
## 6 Conclusion and Discussion In this paper, we have presented the development and application of the Quantum Sequential Scattering Model (QSSM) for quantum state learning. Our model is inspired by the classical diffusion model, which the designing of it involves quantum information theory and adaptive quantum machine learning techniques. Our theoretical analysis and numerical experiments demonstrate the superiority of the QSSM over conventional QNN approaches in terms of training speed and learning accuracy. In particular, the QSSM addresses the barren plateaus issues and provides an efficient solution to learning high-dimensional unknown quantum states based on sequentially learning the reduced target states. Moreover, we have analyzed the impact of increasing entanglement, a key property of quantum states, on the performance and efficiency of the QSSM. Our results show that the model can effectively handle polynomially increased entanglement, enabling us to learn complex quantum states accurately. Numerical demonstrations have shown out-performances for learning physical and algorithmic quantum states in terms of their rank-restricted approximations, indicating the broad applicability of QSSM state learning and the deep connection between state learning and quantum entanglement. There are remaining issues of QSSM for future discussion. Different choices of scattering layers would influence the learning performance, which has to be exemplified. How to further improve the state fidelity provided the high fidelity state from QSSM could become a significant open question. Understanding and resolving the effect of over-parameterization from QSSM should be explained. A theoretical performance guarantee and the connection between scattering layer dilation and QSSM state learning information flow should be established for a complete story of truncated state learning. We also expect some extended applications of QSSM as a new quantum generative model instead of only state learning on near-term quantum devices. REFERENCES Scott Aaronson. Shadow tomography of quantum states. In *Proceedings of the 50th annual ACM SIGACT symposium on theory of computing*, pp. 325–338, 2018. James A Anderson, Jack W Silverstein, Stephen A Ritz, and Randall S Jones. Distinctive features, categorical perception, and probability learning: Some applications of a neural model. *Psychological review*, 84(5):413, 1977. Adriano Barenco, Andre Berthiaume, David Deutsch, Artur Ekert, Richard Jozsa, and Chiara Macchiavello. Stabilization of quantum computations by symmetrization. *SIAM Journal on Computing*, 26(5):1541–1557, 1997. Eric Baum and Frank Wilczek. Supervised learning of probability distributions by neural networks. In *Neural information processing systems*, 1987. Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. *Nature*, 549(7671):195–202, 2017. Eric A. Carlen. Trace inequalities and quantum entropy: An introductory course. 2009. M. Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J. Coles. Cost function dependent barren plateaus in shallow parametrized quantum circuits. *Nature Communications*, 12(1):1791, dec 2021. ISSN 2041-1723. doi: 10.1038/s41467-021-21728-w. URL http://arxiv.org/abs/2001.00550http://dx.doi.org/10.1038/s41467-021-21728-whttp://www.nature.com/articles/s41467-021-21728-w. Anirban N Chowdhury, Guang Hao Low, and Nathan Wiebe. A variational quantum algorithm for preparing quantum gibbs states. 
*arXiv preprint arXiv:2002.00055*, 2020. Iris Cong, Soonwon Choi, and Mikhail D Lukin. Quantum convolutional neural networks. *Nature Physics*, 15(12):1273–1278, 2019. I. Csiszar. *I*-Divergence Geometry of Probability Distributions and Minimization Problems. *The Annals of Probability*, 3(1):146 – 158, 1975. doi: 10.1214/aop/1176996454. URL https://doi.org/10.1214/aop/1176996454. Christoph Dankert, Richard Cleve, Joseph Emerson, and Etera Livine. Exact and approximate unitary 2-designs and their application to fidelity estimation. *Physical Review A*, 80(1):012304, 2009. Luca Franceschi, Mathias Niepert, Massimiliano Pontil, and Xiao He. Learning discrete structures for graph neural networks. In *International conference on machine learning*, pp. 1972–1982. PMLR, 2019. Lucas Friedrich and Jonas Maziero. Avoiding barren plateaus with classical deep neural networks. *arXiv preprint arXiv:2205.13418*, 2022. Motohisa Fukuda, Robert König, and Ion Nechita. RTNI - A symbolic integrator for Haar-random tensor networks. *Journal of Physics A: Mathematical and Theoretical*, 52(42):1–24, 2019a. ISSN 17518121. doi: 10.1088/1751-8121/ab434b. Motohisa Fukuda, Robert König, and Ion Nechita. Rtni—a symbolic integrator for haar-random tensor networks. *Journal of Physics A: Mathematical and Theoretical*, 52(42):425303, 2019b. Xin Geng. Label distribution learning. *IEEE Transactions on Knowledge and Data Engineering*, 28(7):1734–1748, 2016. Konstantinos Georgopoulos, Clive Emary, and Paolo Zuliani. Modeling and simulating the noisy behavior of near-term quantum computers. *Physical Review A*, 104(6):062432, 2021. Sanjib Ghosh, Tomasz Paterek, and Timothy CH Liiew. Quantum neuromorphic platform for quantum state preparation. *Physical Review Letters*, 123(26):260404, 2019. Nicolas Gisin and Helle Bechmann-Pasquinucci. Bell inequality, bell states and maximally entangled states for n qubits. *Physics Letters A*, 246(1-2):1–6, 1998.
l7n59aufeT
The reviewer is sceptical about using mini-batch SGD in the inner loop because, as discussed by the authors, the tokens are non-iid and it seems suboptimal to perform reconstruction on random subsets of the tokens in isolation.
LEARNING TO (LEARN AT TEST TIME) Anonymous authors Paper under double-blind review ABSTRACT We reformulate the problem of supervised learning as learning to learn with two nested loops (i.e., learning problems). The inner loop learns on each individual instance with self-supervision before final prediction. The outer loop learns the self-supervised task used by the inner loop, such that its final prediction improves. Our inner loop turns out to be equivalent to linear attention when the inner-loop learner is only a linear model, and to self-attention when it is a kernel estimator. For practical comparison with linear or self-attention layers, we replace each of them in a transformer with an inner loop, so our outer loop is equivalent to training the architecture. When each inner-loop learner is a neural network, our approach vastly outperforms transformers with linear attention on ImageNet from $224 \times 224$ raw pixels in both accuracy and FLOPs, while (regular) transformers cannot run. 1 INTRODUCTION Test-time training (TTT) is an algorithmic framework for machine learning. The core idea is that each test instance defines its own learning problem, with its own target of generalization (Sun et al., 2020). Since the test instance comes without its label, TTT is performed with a self-supervised task such as reconstruction. Performance should improve on this particular instance for the self-supervised task, because that is the objective optimized by TTT. But will such a process lead to better performance for the main task we actually care about? If improvement for a self-supervised task transfers to a given main task, we say the two tasks are aligned (Sun et al., 2020). In prior work, task alignment has been an art, combining ingenuity with trial and error (Gandelsman et al., 2022; Wang et al., 2023). Crucially, the amount of ingenuity in task design does not scale with more data and compute. Our main approach is to learn an aligned self-supervised task from data, instead of handwriting it from human priors. Specifically, we learn a self-supervised task such that TTT on it actually improves performance on the main task. Since TTT already defines a learning problem, learning its self-supervised task is a form of learning to learn, i.e., meta-learning or bi-level optimization (Schmidhuber, 1987). The literature refers to the two nested learning problems as the inner and outer loop. At training time, the inner loop learns with self-supervision on each training instance individually, as if it were a test instance. The outer loop learns to align the self-supervised task with the main task on the entire training set. At test time, we only invoke the inner loop, i.e., TTT. We name our algorithm MTTT, with M for meta. To better understand MTTT, we look at its simplest nontrivial instantiation, where all components are linear models, and the inner loop takes only one gradient step. Given fixed outer-loop parameters, the inner loop turns out to be equivalent to forward inference with linear attention, i.e., self-attention without softmax (Katharopoulos et al., 2020). For a linear transformer, i.e., transformer with only linear attention layers, we can replace each with an inner loop. Nesting multiple such inner loops into one outer loop, the most naive case of MTTT is equivalent to training a linear transformer. 
It also turns out that our inner loop with a particular kernel estimator is theoretically equivalent to self-attention (with softmax), so MTTT with multiple such inner loops is equivalent to training a transformer. This suggests that our framework is compatible with existing, successful architectures. To extend beyond existing equivalences, we investigate TTT with neural networks. This performs much better than TTT with linear models (i.e., linear transformers), in settings where transformers run out of memory and time. Given the freedom inside our inner loop, we can augment it with heuristics like output normalization and stochastic gradient descent that improve results even more. Our inner loop mirrors regular (non-meta) learning in design, because it breaks each instance into pieces, i.e., tokens, that are explicitly treated as data. This perspective is further validated by our empirical evidence, which is not explained through any existing perspective for architecture design. Given the historic success of deep learning over kernels and linear models, we conjecture that such success can potentially be replicated in our inner loop, with more compute and data under MTTT. 2 INNER LOOP: TEST-TIME TRAINING WITH RECONSTRUCTION The architecture for TTT has a shared feature extractor with two output heads. The self-supervised task has a head \( g \), and the main task has a head \( h \). At test time, the model can only learn from the self-supervised task, so the heads share a feature extractor \( f \). This way, TTT can update the shared features, thus helping the main task if it uses the same kind of features as the self-supervised task. Altogether, this architecture looks like the letter ‘Y’, where \( f \) is the stem, \( g \) and \( h \) are the branches. In principle, TTT is compatible with any choice of self-supervised task. Here we focus on one general-purpose and domain-agnostic family of self-supervised tasks – reconstruction, since it has been highly effective in prior work (Vincent et al., 2008; Pathak et al., 2016; Brown et al., 2020; Bao et al., 2021; He et al., 2021). For reconstruction, the feature extractor \( f \) is also known as the encoder, and the self-supervised head \( g \) as the decoder; \( g \circ f \) together is called an autoencoder. Following a standard process called tokenization, each instance is always broken into a sequence of \( n \) tokens, so we denote both the instance and sequence by \( X = (x_1, \ldots, x_n) \), with token \( x_i \in \mathbb{R}^d \). Our basic unit of reconstruction is each individual token \( x_i \). The reconstruction target is \( x_i \) itself, but the input is transformed by a given function \( \phi \), such as adding noise (Vincent et al., 2008) and random masking (He et al., 2021). For each \( X \), we optimize the parameters of \( f \), denoted by \( W \). Overall, the self-supervised loss is \[ \ell(W; X) = \frac{1}{2n} \sum_{i=1}^{n} \| g \circ f (\phi(x_i); W) - x_i \|^2. \] Note that the decoder \( g \) is also considered given within the scope of TTT, which only updates \( W \). Optimization is performed with \( T \) gradient steps. For each \( t = 1, \ldots, T \), \[ W_t = W_{t-1} - \eta \nabla \ell(W_{t-1}; X), \] where the initial value \( W_0 \) and the learning rate \( \eta \) are given, like \( \phi \) and \( g \). For the main task, we also transform its input \( x_i \) by a given function \( \psi \), in the spirit of symmetry to \( \phi \) for the self-supervised task. 
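To make equations 1–3 concrete, here is a minimal JAX sketch of the inner loop (our own illustration; all names are ours, and the toy instantiation uses the linear components analyzed in Section 4.1).

```python
import jax
import jax.numpy as jnp

def inner_loop(W0, X, phi, psi, f, g, h, T=1, eta=1.0):
    """Equations 1-3: T gradient steps of token reconstruction, then the main-task outputs.
    X has shape (n, d); f, g, h, phi, psi act on single tokens and are vmapped over X."""

    def recon_loss(W, X):
        # Equation 1: 1/(2n) * sum_i || g(f(phi(x_i); W)) - x_i ||^2
        preds = jax.vmap(lambda x: g(f(phi(x), W)))(X)
        return 0.5 * jnp.mean(jnp.sum((preds - X) ** 2, axis=-1))

    W = W0
    for _ in range(T):                                  # Equation 2
        W = W - eta * jax.grad(recon_loss)(W, X)

    return jax.vmap(lambda x: h(f(psi(x), W)))(X)       # Equation 3: X_out

# Toy instantiation with linear components (the setting of Section 4.1).
d = 8
k1, k2, k3, k4, k5 = jax.random.split(jax.random.PRNGKey(0), 5)
theta_g = jax.random.normal(k1, (d, d)) / d ** 0.5
theta_h = jax.random.normal(k2, (d, d)) / d ** 0.5
theta_phi = jax.random.normal(k3, (d, d)) / d ** 0.5
theta_psi = jax.random.normal(k4, (d, d)) / d ** 0.5
X = jax.random.normal(k5, (16, d))                      # n = 16 tokens

X_out = inner_loop(
    W0=jnp.zeros((d, d)), X=X,
    phi=lambda x: theta_phi @ x, psi=lambda x: theta_psi @ x,
    f=lambda x, W: W @ x, g=lambda z: theta_g.T @ z, h=lambda z: theta_h @ z,
)
print(X_out.shape)   # (16, 8)
```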
In prior work, \( \psi \) has mostly been the identity transform, but Section 3 will make \( \psi \) nontrivial, adding expressiveness to the outer loop. Next, we produce the main task outputs by applying \( h \circ f \) individually on each \( \psi(x_i) \). For convenience, we overload \( h, f \) and \( \phi \) so they can produce an output sequence from an input sequence: \[ X_{\text{out}} = h \circ f (\psi(X); W_T) = \left( h \circ f (\psi(x_1); W_T), \ldots, h \circ f (\psi(x_n); W_T) \right). \] Equation 3 could be the last step for main tasks that require \( n \) predictions (e.g., language modeling), but for other tasks that require a single prediction (e.g., object recognition), it is standard to apply an aggregation function across the output sequence, predicting \( \hat{y} = \text{aggregate}(X_{\text{out}}) \) in the end. --- 1 To be precise, \( x_i \in \mathbb{R}^d \) is actually the token’s embedding, not the token itself. For \( X \) a paragraph of text, each token is usually a (sub-)word; for \( X \) an image, each token is usually a patch or pixel. While the type of tokens can potentially be non-numeric, standard techniques are available to embed them into vectors. 2 While the decoder \( g \) also contains learnable parameters, we do not optimize them during TTT in this paper. Our choice, although nonstandard for autoencoders, makes learning to learn conceptually easier in Section 3. Moreover, Sun et al. (2020) and Gandelsman et al. (2022) have shown that whether or not \( g \) is optimized during TTT makes little empirical difference. In fact, for \( T = 1 \) (using notations defined for Equation 2), whether or not a gradient step is taken on \( g \) does not matter at all, because \( g \) affects the final prediction only through \( W_1 \). 2.1 Context Window as a Dataset In standard terminology, \( X = (x_1, \ldots, x_n) \) is called the context window, and \( n \) the window length. But for TTT, \( X \) is a dataset of size \( n \), where each token \( x_i \) is actually a non-independent and non-identically distributed piece of data. This intuition is consistent with our algorithm: Equation 1 simply sums the losses individually across tokens, just like across pieces of data; Equation 3 also processes each \( x_i \) individually as a “test token”, like how a fixed model processes each test instance. Tokenization enables us to reuse \( f \) on \( n \) different parts (tokens) of \( X \), by treating them as pieces of data, and \( X \) as a dataset. It brings the units of operation for TTT “one level below” their traditional sense in machine learning, where \( X \) is a piece of data, and a collection of \( X \)'s is a dataset. TTT can be applied without tokenization, but then \( X \) would be singleton, unless augmentations are used to create an artificial batch like in Sun et al. (2020). 3 Outer Loop: Learning the Self-Supervised Task for TTT As noted above, TTT does not modify the initialization \( W_0 \) for encoder \( f \), the transformations \( \phi \) and \( \psi \), or the decoder \( g \) and main task head \( h \). Altogether, these important components must be determined outside of the scope of TTT. Prior work has tried various heuristics, discussed in Subsection 6.2. Here we take the more principled approach of directly optimizing the final prediction loss on the main task after \( T \) steps of TTT. We first explicitly express the learnable parameters that were hidden in Section 2 because they were considered given within the scope of the inner loop. 
These are the parameters of \( g, h, \phi \) and \( \psi \), denoted by \( \theta_g, \theta_h, \theta_\phi \) and \( \theta_\psi \). We group them together with \( W_0 \) into \( \Theta = (\theta_g, \theta_h, \theta_\phi, \theta_\psi, W_0) \), since they will all be learned in the outer loop. Technically, \( \Theta \) should also contain the learnable parameters of aggregate, which we omit for convenience. Now we derive the outer-loop objective \( L_T \). Denote the main task loss by \( L \), e.g. the cross-entropy loss. In the trivial case, for \( T = 0 \), i.e. without TTT, the final prediction loss is exactly \( L \). To be precise, for each instance \( X \) with unknown label \( y \), \[ L_0(\Theta; X, y) = L(h \circ f(\psi(X); W_0), y). \] For \( T = 1 \), the parameters of \( f \) become \( W_1 = W_0 - \eta \nabla \ell(W_0; X) \), as defined in Equation 1. Therefore, the final prediction loss for the main task is \[ L_1(\Theta; X, y) = L(h \circ f(\psi(X); W_1), y) = L(h \circ f(\psi(X); W_0 - \eta \nabla \ell(W_0; X)), y). \] For any \( T \geq 1 \), \( \theta_g \) and \( \theta_\phi \) implicitly determine the inner-loop loss function \( \ell \) defined in Equation 1, therefore affect \( L_T \) through \( \nabla \ell \). In other words, \( \theta_g \) and \( \theta_\phi \) parameterize the self-supervised task.\(^3\) Going further, for \( T \geq 2 \), \[ L_T(\Theta; X, y) = L(h \circ f(\psi(X); W_T), y) \] would be cumbersome to write out in terms of \( W_0 \), but can be expressed recursively, with \( W_t \) defined in Equation 2 for each \( t = 1, \ldots, T \). At training time, the outer loop calculates \( L_T \) individually for each labeled training instance \( X \), then optimizes the average \( \bar{L}_T \) on the entire training set with (a variant of) stochastic gradient descent. Calculating \( \nabla L(\Theta; X, y) \) requires taking gradients through \( \nabla \ell(W_t; X) \) for \( t = 0, \ldots, T - 1 \), since the latter is implicitly a function of \( W_0, \theta_g \) and \( \theta_\phi \). This turns out to be easily programmable in JAX, and surprisingly efficient in practice, as we will show in Section 5. 4 Choice of Learner for Inner Loop While our inner loop is a sequence of forward and backward operations, it can also be represented as a single forward operation on its unrolled computation graph, so the outer loop becomes regular (non-meta) learning using this graph as a fixed model. It turns out that for simple choices of the inner-loop learner, this equivalent graph can be interpreted through the lens of architecture design. \(^3\)Note that even though \( \theta_g \) and \( \theta_\phi \) are included as arguments of \( L_T \) for all values of \( T \), they do not actually matter for \( L_0 \). When the inner loop is trivial, i.e runs for 0 iteration, learning to learn collapses to regular (non-meta) learning, and the self-supervised task does not matter. 4.1 TTT with Linear Models: Equivalence to Linear Attention The simplest choice for the feature extractor \( f \) is a linear model: \[ f(x; W) = Wx. \] And the outer-loop components \( g, h, \phi \) and \( \psi \) are linear as well. Specifically, \[ g(x; \theta_g) = \theta_g^T x, \quad h(x; \theta_h) = \theta_h x, \quad \phi(x; \theta_\phi) = \theta_\phi x, \quad \psi(x; \theta_\psi) = \theta_\psi x. \] To make the math even simpler, we always initialize the feature extractor with \( W_0 = 0 \). 
Under this construction, the self-supervised loss in Equation 1 becomes
\[ \ell(W; X) = \frac{1}{2n} \sum_{i=1}^{n} \| g \circ f (\phi(x_i); W) - x_i \|^2 = \frac{1}{2n} \sum_{i=1}^{n} \| \theta_g^T W \theta_\phi x_i - x_i \|^2. \]
For \( W_0 = 0 \), one gradient step with learning rate \( \eta = 1 \) produces
\[ W_1 = W_0 - \nabla \ell (W_0; X) = \frac{1}{n} \sum_{i=1}^{n} (\theta_g x_i)(\theta_\phi x_i)^T. \]
Using \( W_1 \) as the updated weights for the feature extractor, the updated features for each token \( x_j, j = 1, \ldots, n \), become
\[ f(\psi(x_j); W_1) = \frac{1}{n} \sum_{i=1}^{n} (\theta_g x_i)(\theta_\phi x_i)^T \theta_\psi x_j. \]
This happens to be linear attention (explained in Appendix A), where \( \theta_\phi, \theta_\psi, \theta_g \) are the key, query, value weights. \( h \) is the projection operation used for multi-head attention, discussed in Appendix B.

4.2 TTT with Kernels: Equivalence to Self-Attention

So far, we have considered \( f \) with explicit parameters. But machine learning is more than just parametric models and gradient-based optimization. Here we consider \( f \) as a non-parametric learner. Recall that non-parametric learning produces an algorithmic function controlled by the training data \( x_1, \ldots, x_n \), without explicit parameters of a fixed shape. So our notation for the encoder changes from \( f(x; W) \) to \( f(x; x_1, \ldots, x_n) \). For example, the nearest neighbor \( f(x; x_1, \ldots, x_n) \) simply looks for the most similar piece of training data. Some other non-parametric learners are: support vector machines (SVMs), radial basis function networks, and kernel ridge regression. But unlike most cases of non-parametric learning, our data for TTT come without labels, since \( x_1, \ldots, x_n \) are just tokens of an unlabeled test instance \( X \). Analogous to parametric learners, non-parametric ones can also learn with self-supervision to produce better features for a main task downstream. So for each \( i = 1, \ldots, n \), we create each label \( z_i = \theta_V x_i \) from the unlabeled input \( x_i \) itself, where \( \theta_V \) is an outer-loop parameter like \( \theta_g \) in the parametric case. The popular self-attention (with softmax) is equivalent to TTT with \( f \) as the time-honored Nadaraya-Watson estimator (Bierens, 1988; Cai, 2001), which outputs a locally weighted average of labels \( z_i, i = 1, \ldots, n \), using a kernel \( \kappa \) as the weighting function:
\[ f(x; x_1, \ldots, x_n) = \frac{1}{\sum_{i=1}^{n} \kappa(x, x_i)} \sum_{i=1}^{n} \kappa(x, x_i) z_i. \]
See Appendix C for a detailed derivation of this estimator. We choose the kernel \( \kappa \) to be
\[ \kappa(x, x'; \theta_K, \theta_Q) \propto e^{(\theta_K x')^T \theta_Q x}, \]
where \( \theta_K \) and \( \theta_Q \) are known as bandwidth hyper-parameters for kernels. But for MTTT, they are outer-loop parameters like \( \theta_V \). As detailed in Appendix C, asymmetric kernels like our \( \kappa \) above have enjoyed a long tradition (Breiman et al., 1977; Chen, 2017). Altogether, Equations 12 and 13 combined are the same as self-attention, where \( \theta_K, \theta_Q, \theta_V \) are the key, query, value weights. Unlike the parametric case, TTT with kernels does not solve an optimization problem, therefore does not produce a different implementation from self-attention.
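The linear-model equivalence above is easy to verify numerically. Below is a small NumPy check (ours, illustrative): one inner-loop gradient step from \( W_0 = 0 \) reproduces non-normalized linear attention with a \( 1/n \) factor exactly, matching the updated-feature formula above; the kernel case needs no separate check, since the Nadaraya-Watson estimator with the kernel of equation 13 is literally the softmax-attention formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))                                   # tokens x_1, ..., x_n as rows
theta_K, theta_Q, theta_V = (rng.normal(size=(d, d)) for _ in range(3))

# TTT with a linear f: one gradient step from W_0 = 0 (equation 10),
# then updated features for every token (equation 11).
W1 = sum(np.outer(theta_V @ x_i, theta_K @ x_i) for x_i in X) / n
ttt_features = np.stack([W1 @ (theta_Q @ x_j) for x_j in X])

# Linear attention with the same key/query/value weights.
K, Q, V = X @ theta_K.T, X @ theta_Q.T, X @ theta_V.T
linear_attention = (Q @ K.T) @ V / n

print(np.allclose(ttt_features, linear_attention))            # True
```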
While our equivalence here only provides an alternative interpretation, the fact that both linear models and kernels are empirically effective as inner-loop learners suggests that other learners might also be effective. 4.3 TTT WITH NEURAL NETWORKS From the past three decades of progress in machine learning, we observe that the performance of \[ \text{deep learning} > \text{kernels} > \text{linear models} \] given enough data and compute. In Subsection 2.1, we discussed the perspective that our inner loop mirrors regular (non-meta) learning, at least in terms of algorithmic design. To collect empirical evidence for this perspective, we investigate if the ordering above is preserved within our inner loop. It is well known that transformers with self-attention (TTT with kernels) often outperform those with linear attention (TTT with linear models), i.e., linear transformers (Katharopoulos et al., 2020). This validates the rightmost link of the ordering within our inner loop. But TTT with neural networks has no existing equivalence, so we devote the rest of the paper to taking a small step in this huge search space. We delay implementation details such as architecture and optimization to Section 5, and end this subsection with one remaining conceptual implication. TTT with neural networks and linear models, or any parametric learner, has complexity linear in \( n \) for each test instance \( X = (x_1, \ldots, x_n) \), since the complexity for each token is constant in \( n \), and only proportional to the number of parameters. TTT with any non-parametric learner, however, cannot have linear complexity by definition, since its complexity for each token cannot be constant in \( n \), i.e., the amount of training data. For Nadaraya-Watson, the complexity for each token happens to be linear. This serves as an alternative explanation for the quadratic complexity of self-attention. 5 EXPERIMENTS The goal of our experiments is not to be at the top of leaderboards, but to evaluate our key perspective, that the inner loop mirrors regular (non-meta) learning, in terms of three qualities. 1) Descriptive: Does our equivalence to linear attention hold in practice? 2) Prescriptive: Does our perspective show a path for new methods with better performance? 3) Predictive: Does our perspective accurately explain the empirical behaviors of new methods? TTT layers. The cleanest and most practical way to answer these questions is to replace every attention layer in an architecture with a TTT inner loop, because ultimately, attention layers are only used as parts of an architecture. Since the inner loop here functions as a drop-in replacement for attention, we call it a TTT layer, which can also be thought of as an equivalent computation graph (discussed in Section 4). After dropping in the TTT layers, the entire architecture can be trained with MTTT, using the same recipe as that with attention layers, without TTT. Variants of MTTT. We call our method MTTT-Linear when the encoder \( f \) is linear in each TTT layer, and MTTT-MLP when \( f \) is a multi-layer perceptron (MLP). We always keep \( g, h, \phi, \psi \) linear following Subsection 4.1. For MTTT-Linear, we always keep \( W_0 = 0 \) fixed to ensure equivalence to linear attention, since MTTT-Linear is only used to investigate descriptiveness. For MTTT-MLP, we experiment with the two design choices below, to investigate the prescriptive power of our perspective. For simplicity, we always set the inner-loop learning rate \( \eta = 1 \). Inner-loop architecture.
For MTTT-MLP, the MLP architecture simply follows the standard design in transformers. Concretely, our MLP has 2 linear layers with GELU activation in between; the input and output dimensions are the same, and the hidden dimension is 4× as large. The only architectural change, called Decoder LN, is that we add a layer norm (LN) after the output of \( g \), to normalize the reconstruction outputs, in the spirit of He et al. (2021). We explain this design choice in Figure 2, deferred to the appendix due to space constraints. Inner-loop optimization. When the inner loop takes \( T > 1 \) steps, each gradient step, by default, uses the average loss over all the tokens, defined in Equation 1. But \( T \) steps make the inner loop \( T \times \) slower. Given the popularity of stochastic gradient descent (SGD) in deep learning, we use it for our inner loop. Specifically, we randomly split the \( n \) tokens into \( T \) mini-batches, each of size \( n/T \), and take one inner-loop step per mini-batch. Therefore, \( T \) steps of SGD combined consume the same amount of compute as a full-batch gradient step over all the \( n \) tokens together (a short sketch of this procedure appears at the end of Subsection 5.1).

| Drop-in layer | Acc. (%) | Params. (M) | FLOPs |
|------------------------|----------|-------------|-------|
| Linformer (Wang et al., 2020b) | 71.9 | 22.2 | 0.9× |
| Longformer (Beltagy et al., 2020) | 76.3 | 27.4 | 1.1× |
| SOFT (Lu et al., 2021) | 74.6 | 23.5 | 0.9× |
| Hyena (Poli et al., 2023) | 74.8 | 23.5 | 1.0× |
| Self-attn. (Beyer et al., 2022) | 76.5 | 22.1 | 1.1× |
| Linear attn. (Katharopoulos et al.) | 73.2 | 22.1 | 1.0× |
| Linear attn. identity map | 73.0 | 22.1 | 1.0× |
| MTTT-Linear | 72.8 | 22.1 | 1.1× |
| MTTT-MLP | 74.6 | 24.6 | 1.5× |

Table 1: Results on ImageNet. FLOPs are presented as relative to linear attention. Our inner-loop dataset is tiny, with \( n = 196 \). MTTT-Linear matches linear attention with identity map, as expected. MTTT-MLP outperforms both by a nontrivial margin, but is \( 1.5 \times \) slower than linear attention. Also as expected, self-attention, i.e., the original ViT, performs the best. See Subsection 5.1 for details.

### 5.1 ImageNet

We first experiment with the standard setting of ImageNet object recognition (Deng et al., 2009). Our benchmark architecture is Vision Transformer (ViT) (Dosovitskiy et al., 2020). We adopt the well-known recipe of Beyer et al. (2022) by the ViT authors, and their recommended setup for fast research turnaround – training ViT-Small for 90 epochs. With an accuracy of 76.5%, it is often regarded as a fast and competitive baseline. Its recipe splits each image into \( 14 \times 14 \) patches, then embeds each patch with a learned projection. So each \( X \) becomes \( n = 196 \) tokens. Thinking of the context window as training data for TTT, a dataset of size 196 is not nearly enough for deep learning, even if adequate for a linear model. Since over-parameterized neural networks are known to be able to regularize themselves (Zhang et al., 2021), MTTT-MLP should not do poorly, but might not justify the extra compute. In addition, small \( n \) means our linear complexity is less of an advantage in comparison to self-attention (with softmax). Our results in Table 1 confirm those expectations. MTTT-MLP outperforms MTTT-Linear by a small margin, but uses more FLOPs. If MTTT-MLP were using a smaller architecture that matches the FLOPs of MTTT-Linear, it would have performed worse. Self-attention, for which the training recipe was originally designed, performs the best.
In terms of descriptiveness, MTTT-Linear almost exactly matches linear attention (identity map) – the 0.2% difference is likely due to random noise and loss of numeric precision. However, MTTT-Linear uses \( 0.1 \times \) more FLOPs than linear attention. This extra factor exists because the JAX compiler is unaware that the compiled inner loop will receive \( W_0 = 0 \), so all the terms involving it can be eliminated. We manually calculated the total number of FLOPs for those terms involving \( W_0 \), and found that it matches the difference in FLOPs between MTTT-Linear and linear attention. Taking more gradient steps in the inner loop significantly improves the accuracy of MTTT-MLP up to \( T = 4 \), as shown in the left panel of Figure 1. However, \( T \) steps on the full batch cost \( T \times \) the number of FLOPs, so this improvement is predictive but not practically useful. We have experimented with SGD and found that it does not help here. Since \( n = 196 \) is already a small batch size, splitting 196 tokens into even smaller mini-batches for SGD is usually considered bad practice for deep learning. The right panel of Figure 1 shows the average \( \ell(W_t; X) \) across the test set, for TTT layer 6 (out of 12 in total). The plot for all layers is deferred to Figure 3 in the appendix due to space constraints, but the overall behavior is essentially the same across layers. The five lines are for \( t = 0, \ldots, T \), where \( T = 4 \), i.e., the optimal choice of \( T \) according to the left panel. For every epoch of outer-loop learning, the average inner-loop loss decreases monotonically with more steps. The behavior of this novel inner loop matches that of regular learning with successful optimization. While MTTT has not been practically useful in this setting, its behavior matches our expectations, indicating that our perspective is predictive on top of descriptive.

Figure 1: More inner-loop steps improve accuracy up to $T = 4$ (left). Behavior of inner-loop loss mirrors regular (non-meta) learning (right).

Table 2: Ablations on ImageNet. See Subsection 5.1 for details.

Note that every hyper-parameter is set according to Beyer et al. (2022), and we have not changed any to get the expected behavior. Our inner-loop learning rate $\eta$ has always been 1, derived from the equivalence to linear attention. In Table 2, we ablate MTTT-MLP with the four combinations of whether or not to use Decoder LN and whether or not to train $W_0$ in the outer loop. We choose these two factors since Decoder LN is our own design, and training $W_0$ goes a step further from the equivalence to linear attention, which requires fixing $W_0 = 0$. Empirically, both components prove to be important for good performance. Therefore, we always keep them for future experiments, without spending more resources to ablate them. For additional context around our results, we run a few baselines that also have linear complexity. Linear attention as proposed by Katharopoulos et al. (2020) uses manually engineered features of the input tokens, instead of the input tokens themselves. We label the former with the citation, and the latter as identity map. Other baselines have roughly the same accuracy as MTTT-MLP. Longformer stands out with nearly the same accuracy as self-attention, but we find that the default window size for its sliding attention is $512 > 196$, so it happens to be the same as self-attention for $n = 196$.
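For reference in the next subsection, here is a minimal sketch of the inner-loop SGD variant described under Inner-loop optimization: the \( n \) tokens are split into \( T \) mini-batches and one step is taken per mini-batch, so the total inner-loop compute matches a single full-batch step. The helper `grad_inner_loss` and all other names are illustrative assumptions.

```python
import numpy as np

def inner_loop_sgd(W0, X, grad_inner_loss, T=4, eta=1.0, seed=0):
    # Split the n tokens into T mini-batches of roughly n/T tokens each
    # and take one inner-loop gradient step per mini-batch.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    W = W0
    for batch in np.array_split(perm, T):
        W = W - eta * grad_inner_loss(W, X[batch])
    return W
```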
### 5.2 ImageNet from 224 × 224 Raw Pixels

To better evaluate our perspective that the inner loop mirrors regular (non-meta) learning, we need a setting where the sequence length $n$, i.e., the amount of training data for the inner loop, is actually comparable to the amount in typical applications of deep learning. Inspired by Chen et al. (2020), we experiment with ImageNet object recognition using raw pixels instead of patches as input tokens. This gives us $n = 224 \times 224 = 50,176$. For Chen et al. (2020), the point of using pixels is to eliminate image-specific prior knowledge. At a high level, the progress in deep learning over the past decade can be seen as gradually eliminating human priors, in favor of general methods that take advantage of data and compute. Following their setting, we use learned positional embeddings, instead of engineered positional encoding. Therefore, our entire system is permutation invariant. While Chen et al. (2020) do not use any data augmentation, they use a much larger collection of images. We have been able to remove all the augmentations except one – random resize crop (Szegedy et al., 2015), without which all methods fail to get more than 40% accuracy. Since random resize crop does not add any synthetic artifact to natural images, we justify it as using more data without actually using another dataset. We always use random resize crop for the rest of the subsection. Experiments in this subsection are conducted with ViT-Tiny unless noted otherwise, because training with 50k tokens per instance is very compute-intensive. Every other aspect of our recipe follows Beyer et al. (2022), as in Subsection 5.1. Our results are in Table 3. Self-attention, which performed the best with patches, cannot fit in memory. Even if memory were not an issue, it would still need at least $200 \times$ more FLOPs than linear attention according to our estimations. We highlight two results. First, taking $T = 4$ steps of SGD improves accuracy by 3.3% on top of MTTT-MLP with $T = 1$, without costing extra FLOPs. To the best of our knowledge, this improvement cannot be explained through any existing perspective without an explicit inner loop. While transformers have already eliminated the locality prior in convolutions, most papers on ImageNet still use patches instead of pixels as input tokens. This is equivalent to a first layer of convolutions where the filter size and stride size both equal the patch size, and is in fact often implemented as such. Using raw pixels as input tokens eliminates the locality prior completely.

| Model | Drop-in layer | Acc. (%) | Params. (M) | FLOPs |
|-----------|--------------------------------|----------|-------------|-------|
| ViT-Tiny | Self-attn. (Beyer et al., 2022) | - | 5.6 | 200× |
| | Linear attn. (Katharopoulos et al.) | 53.7 | 5.6 | 1.0× |
| | Linear attn. identity map | 49.9 | 5.6 | 1.0× |
| | MTTT-Linear | 50.0 | 5.6 | 1.1× |
| | MTTT-MLP | 61.9 | 6.8 | 1.8× |
| | MTTT-MLP SGD T = 4 | 65.2 | 6.8 | 1.8× |
| ViT-Small | Linear attn. (Katharopoulos et al.) | 54.4 | 21.8 | 3.9× |
| | Linear attn. identity map | 55.7 | 21.8 | 3.9× |

Table 3: Results on ImageNet from pixels. FLOPs are presented as relative to linear attention. MTTT-MLP with SGD outperforms MTTT-MLP without SGD by 3.3%, and does not cost extra FLOPs. It improves almost 10% on top of a ViT-Small with linear attention, which uses more than 3× the parameters and 2× the FLOPs. See Subsection 5.2 for details.
Like in Figure 1, our inner-loop loss with SGD steps also behaves like regular learning, as shown in Figure 4 of the appendix. Second, MTTT-MLP with SGD improves almost 10% on top of even a ViT-Small with linear attention, which uses more than 3× parameters and 2× FLOPs. For SGD, $T = 4$ was simply chosen according to the optimal on patches. These pieces of empirical evidence indicate that our perspective is prescriptive, by showing a path to new methods with better performance. It is also predictive, since expectations derived from regular learning accurately explain novel behaviors of the inner loop, without any hyper-parameter tuning. In terms of descriptiveness, MTTT-Linear matches linear attention (identity map) within 0.1%. 6 RELATED WORK 6.1 IN-CONTEXT LEARNING AS EXPLICIT LEARNING To the best of our knowledge, three pieces of prior work (Akyürek et al., 2022; Dai et al., 2022; Von Oswald et al., 2023) have independently proposed the idea that linear transformers can simulate some variant of linear regression on in-context data, as an explanation for in-context learning. Take Von Oswald et al. (2023) as an example. Given a labeled dataset, their work first trains a linear regression model with $T$ gradient steps, then constructs the weights of a $T$-layer linear transformer to produce the same output as the trained linear model. Our work differs in two main aspects: self-supervision and direction of claims. First, prior work focuses on showing that (linear) transformers can simulate learning on specific, supervised objectives, e.g. ridge regression, so their constructions rely on labeled pairs of in-context training data. If there is a meta-learning component, it is restricted to specific hyper-parameters, e.g. the learning rate. On the other hand, our inner loop implements a general objective that itself is mostly learned, so it does not need labeled data. This makes our inner loop less interpretable but more practical. At a higher level, transformers are complex models, and linear models are simple. Prior work uses the complex to construct the simple. Our construction takes the converse direction. In prior work, empirical performance of meta-learning with linear regression has been significantly worse than linear transformers, even on labeled in-context data. Again, with the goal of explaining transformers, their claims often indicate that linear transformers are superior to meta-learning. Our experiments also point towards the converse. Recently, Mahankali et al. (2023); Zhang et al. (2023); Ahn et al. (2023) and Tarzanagh et al. (2023) have further extended the arguments in prior work, therefore inheriting their two aspects above. Tarzanagh et al. (2023), in particular, argues that transformers implement non-parametric learners (SVMs) on labeled data, supporting our intuition in the converse direction. In summary, our paper complements prior work, with the different goal of inspiring potentially more powerful systems. 6.2 Learning at Test Time The idea of learning at test time has a long history in machine learning. One of the earliest instantiations of this idea is Bottou & Vapnik (1992): For each test input, train on its neighbors before making a prediction. This idea continues to be effective for SVMs (Zhang et al., 2006) and large language models (Hardt & Sun, 2023). 
In computer vision, the general idea of learning at test time has also been applied to specific applications (Jain & Learned-Miller, 2011; Shocher et al., 2018; Mullapudi et al., 2018; Luo et al., 2020; Nitzan et al., 2022). Transductive learning (Gammerman et al., 1998) is the first to articulate our philosophy in Section 1. As stated by Vapnik (2013): “Try to get the answer that you really need, but not a more general one.” Implementation-wise, it uses test data to add constraints to the margin of SVMs (Joachims, 2002; Collobert et al., 2006). This is an example of non-parametric learning at test time, similar to our kernel estimator in Subsection 4.2. However, transductive learning usually needs multiple test instances to be practically effective, unlike our method, which only needs a single instance at a time. Next we have an in-depth discussion of two particular relevant lines of work: TTT and fast weights. 6.2.1 Test-Time Training with Self-Supervision Our inner loop performs TTT with self-supervision, discussed in Section 2. This general framework was first proposed by Sun et al. (2020), with results for supervised learning under distribution shifts. Unlike previous lines of work, TTT can be used in principle with any self-supervised task, on any type of data, for any application, making it particularly suitable for deep learning. Follow-up work has applied TTT to batches of data (Wang et al., 2020a; Liu et al., 2021), and other main tasks like robot manipulation (Hansen et al., 2020) and locomotion (Sun et al., 2021), among others. Particularly relevant to our inner loop, Gandelsman et al. (2022) performs TTT with reconstruction as the self-supervised task, and Wang et al. (2023) applies this method online to video streams. The biggest difference is that our reconstruction task is parameterized for meta-learning. In addition, our inner loop obtains multiple units of learning, $x_1, \ldots, x_n$, out of a single test instance through tokenization. In prior work, each unit of learning is created through either data augmentations or a randomized $\phi$, such as masking random patches (He et al., 2021). 6.2.2 Fast Weights The general idea of fast weights is to update the parameters of a “fast” model on the most relevant data, as opposed to a “slow” model on all data (Hinton & Plaut, 1987; Tieleman & Hinton, 2009), which most people today simply refer to as training or learning. The most relevant data can be the test instance itself, where the update is performed without human supervision at test time. Our work shares the same general idea, but formulates an explicit learning problem for each inner-loop update, with the goal of generalizing to that test instance. To make fast weights “fast”, i.e. efficient, their update rules avoid forming an optimization problem with explicit objectives on the training data, i.e. a learning problem. For example, given each input $x$, one popular update rule for fast weights is to add $xx^T$ (or some variant thereof) (Ba et al., 2016) like in Hebbian learning and Hopfield networks (Hopfield, 1982). In contrast, our update rule for TTT is an explicit training process as its name suggests. Fast weight programmers (FWPs) (Schmidhuber, 1992) produce the updates to fast weights with a “slow” model. MTTT’s outer loop can be seen as training the “slow” model, if its inner loop is viewed as updating fast weights. 
In particular, FWPs with the Hebbian update rule above are equivalent to linear transformers (Schlag et al., 2021), therefore also to MTTT with linear models. Clark et al. (2022) add a final layer of fast weights to a transformer and train its initialization with a FWP to improve performance on language modeling. Given the broadest definition of FWPs, MTTT with parametric models can be seen as a special case (Kirsch & Schmidhuber, 2021). But the difference in update rules between TTT and fast weights, as discussed, carries over to MTTT and FWPs. Irie et al. (2021) have tried “fast” networks with weights directly produced as output of a “slow” network, without forming a learning problem. In contrast, our inner loop mirrors regular (non-meta) learning. This helps us with empirical intuitions like in Figure 1, and heuristics like output normalization and stochastic gradient descent. REFERENCES Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. *arXiv preprint arXiv:2306.00297*, 2023. Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? investigations with linear models. *arXiv preprint arXiv:2211.15661*, 2022. Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. *Advances in neural information processing systems*, 29, 2016. Hangbo Bao, Li Dong, and Furu Wei. Beit: BERT pre-training of image transformers. *CoRR*, abs/2106.08254, 2021. URL https://arxiv.org/abs/2106.08254. Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. *arXiv preprint arXiv:2004.05150*, 2020. Lucas Beyer, Xiaohua Zhai, and Alexander Kolesnikov. Better plain vit baselines for imagenet-1k. *arXiv preprint arXiv:2205.01580*, 2022. Hermanus Josephus Bierens. The nadaraya-watson kernel regression function estimator. (*Serie Research Memoranda; No. 1988-58). Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam.*, 1988. Léon Bottou and Vladimir Vapnik. Local learning algorithms. *Neural computation*, 4(6):888–900, 1992. Leo Breiman, William Meisel, and Edward Purcell. Variable kernel estimates of multivariate densities. *Technometrics*, 19(2):135–144, 1977. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Zongwu Cai. Weighted nadaraya–watson regression estimation. *Statistics & probability letters*, 51(3):307–318, 2001. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *International conference on machine learning*, pp. 1691–1703. PMLR, 2020. Minmin Chen, Kilian Q Weinberger, and John Blitzer. Co-training for domain adaptation. In *Advances in neural information processing systems*, pp. 2456–2464, 2011. Yen-Chi Chen. A tutorial on kernel density estimation and recent advances. *Biostatistics & Epidemiology*, 1(1):161–187, 2017. Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, and Mohammad Norouzi. Meta-learning fast weight language models. *arXiv preprint arXiv:2212.02475*, 2022. Ronan Collobert, Fabian Sinz, Jason Weston, Léon Bottou, and Thorsten Joachims. Large scale transductive svms. 
*Journal of Machine Learning Research*, 7(8), 2006. Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models secretly perform gradient descent as meta optimizers. *arXiv preprint arXiv:2212.10559*, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009.
Dgc5RWZwTR
This question is again related to the diversity of tasks. How is your neural solver affected when some tasks are more difficult than others? How does the budget get proportioned then? What happens to the influence matrix then? Can you elaborate on this?
Efficient Training of Multi-task Combinatorial Neural Solver with Multi-armed Bandits Anonymous authors Paper under double-blind review Abstract Efficiently training a multi-task neural solver for various combinatorial optimization problems (COPs) has been less studied so far. In this paper, we propose a general and efficient training paradigm based on multi-armed bandits to deliver a unified combinatorial multi-task neural solver. To this end, we resort to the theoretical loss decomposition for multiple tasks under an encoder-decoder framework, which enables more efficient training via proper bandit task-sampling algorithms through an intra-task influence matrix. Our method achieves much higher overall performance with either limited training budgets or the same training epochs, compared to standard training schedules, which can be promising for advising efficient training of other multi-task large models. Additionally, the influence matrix can provide empirical evidence of some common practices in the area of learning to optimize, which in turn supports the validity of our approach. 1 Introduction Although a generic neural solver for multiple combinatorial optimization problems (COPs) is appealing, this problem is less studied in the literature, and training such a neural solver can be prohibitively expensive, especially in the era of large models. To relieve the training burden and better balance the resource allocation, in this paper, we propose a novel training paradigm via multi-armed bandits (MAB) from a multi-task learning (MTL) perspective, which can efficiently train a multi-task combinatorial neural solver under limited training budgets. To this end, we treat each COP with a specific problem scale as a task and manage to deliver a generic solver handling a set of tasks simultaneously. Different from a standard joint training in MTL, we employ MAB algorithms to select/sample one task in each training round, hence avoiding the complex balancing of losses from multiple tasks. To better guide the MAB algorithms, we employ a reasonable reward design derived from the theoretical loss decomposition for the widely adopted encoder-decoder architecture in MTL. This loss decomposition also brings about an influence matrix revealing the mutual impacts between tasks, which provides rich evidence to explain some common practices in the scope of COPs. To emphasize, our method is the first to consider training a generic neural solver for different kinds of COPs. This greatly differs from existing works focusing on either solution construction (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019; Kwon et al., 2020) or heuristic improvement (Lu et al., 2020; Wu et al., 2021b; Agostinelli et al., 2021; Fu et al., 2021; Kool et al., 2022). Some recent works seek to generalize neural solvers to different scales (Hou et al., Li et al., 2021; Cheng et al., 2023; Wang et al., 2023) or varying distributions (Wang et al., 2021; Bi et al., 2022; Geisler et al., 2022), but with no ability to handle multiple types of COPs simultaneously. Experiments are conducted for 12 tasks: Four types of COPs, the Travelling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), the Orienteering Problem (OP) and the Knapsack Problem (KP), and each of them with three problem scales. 
We compare our approach with single-task training (STL) and extensive MTL baselines (Mao et al., 2021; Yu et al., 2020; Navon et al., 2022; Kendall et al., 2018; Liu et al., 2021a,b) under the cases of the same training budgets and same training epochs. Compared with STL, our approach needs no prior knowledge about tasks and can automatically focus on harder tasks so as to maximally utilize the training budget. What’s more, when comparing with STL under the same training epoch, our approach not only enjoys the cheaper training cost which is strictly smaller than that of the most expensive task, but also shows the generalization ability by providing a universal model to cover different types of COPs. Compared with the MTL methods, our method only picks the most impacting task to train at each time which improves the training efficiency without explicitly balancing the losses. In summary, our contributions can be concluded as follows: (1) We propose a novel framework for efficiently training a combinatorial neural solver for multiple COPs via MAB, which achieves prominent performance against standard training paradigms with limited training resources and can further advise efficient training of other large models; (2) We study the theoretical loss decomposition for the encoder-decoder architecture, leading to the influence matrix reflecting the inherent task relations and reasonable reward guiding the update of MAB algorithms.; (3) We verify several empirical observations for neural solvers from previous works [Kool et al., 2019; Joshi et al., 2021] by the influence matrix, demonstrating the validity and reasonableness of our approach. 2 RELATED WORK Neural solvers for COPs. Pointer Networks [Vinyals et al., 2015] pioneered the application of deep neural networks for solving combinatorial optimization problems. Subsequently, numerous neural solvers have been developed to address various COPs, such as routing problems [Bello et al., 2017; Kool et al., 2019; Lu et al., 2020; Wu et al., 2021b], knapsack problem [Bello et al., 2017; Kwon et al., 2020], job shop scheduling problem [Zhang et al., 2020], and others. There are two prevalent approaches to constructing neural solvers: solution construction [Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019; Kwon et al., 2020], which sequentially constructs a feasible solution, and heuristic improvement [Lu et al., 2020; Wu et al., 2021b; Agostinelli et al., 2021; Fu et al., 2021; Kool et al., 2022], which provides meaningful information to guide downstream classical heuristic methods. In addition to developing novel techniques, several works [Wang et al., 2021; Geisler et al., 2022; Bi et al., 2022; Wang et al., 2023] have been proposed to address generalization issues inherent in COPs. For a comprehensive review of the existing challenges in this area, we refer to the survey [Bengio et al., 2020]. Multi-task learning. Multi-Task Learning (MTL) aims to enhance the performance of multiple tasks by jointly training a single model to extract shared knowledge among them. Numerous works have emerged to address MTL from various perspectives, such as exploring the balance on the losses from different tasks [Mao et al., 2021; Yu et al., 2020; Navon et al., 2022; Kendall et al., 2018; Liu et al., 2021a,b] designing module-sharing mechanisms [Misra et al., 2016; Sun et al., 2020; Hu & Singh, 2021], improving MTL through multi-objective optimization [Sener & Kolotun, 2018; Lin et al., 2019; Momma et al., 2022], and meta-learning [Song et al., 2022]. 
To optimize MTL efficiency and mitigate the impact of negative transfer, some research focuses on task-grouping [Kumar & III, 2012; Zamir et al., 2018; Standley et al., 2020; Fifty et al., 2021], with the goal of identifying task relationships and learning within groups to alleviate negative transfer effects in conflicting tasks. On the application level, MTL has been extensively employed in various domains, including natural language processing [Collobert & Weston, 2008; Luong et al., 2016], computer vision [Zamir et al., 2018; Seong et al., 2019], bioinformatics [Xu et al., 2017], and many others. However, there are limited works on solving COPs using MTL. In this work, we highlight research on MTL for COPs and propose a learning framework to concurrently address various types of COPs. Multi-armed bandits. Multi-armed bandit (MAB) is a classical problem in decision theory and machine learning that addresses the exploration-exploitation trade-off. Several algorithms and strategies have been suggested to solve the MAB problem, such as the $\epsilon$-greedy, Upper Confidence Bound (UCB) family of algorithms [Lai et al., 1985; Auer et al., 2002], the Exp3 family [Littlestone & Warmuth, 1994; Auer et al., 1995; Gur et al., 2014], and the Thompson sampling [Thompson, 1933; Agrawal & Goyal, 2012; Chapelle & Li, 2011]. These methods differ in their balance of exploration and exploitation, and their resilience under distinct types of uncertainty. The MAB has been extensively studied in both theoretical and practical contexts, and comprehensive details can be found in [Slivkins et al., 2019; Lattimore & Szepesvári, 2020]. 3 METHOD We consider $K$ types of COPs, denoted as $T^i$ ($i = 1, 2, ..., K$), with $n_i$ different problem scales for each COP. Thus, the overall task set is $\mathcal{T} = \bigcup_{i=1}^{K} T^i := \{T^i_j | j = 1, 2, ..., n_i, i = 1, 2, ..., K\}$. Figure 1: Pipeline of MAB for Solving COPs in view of MTL. We consider four types of COPs: TSP, CVRP, OP, and KP, each with a corresponding header and decoder. The encoder, which is common to all COPs, is also included. For each time step, we utilize the MAB algorithm to select a specific task for training, such as CVRP-100 depicted in the figure. We then obtain the loss for the selected task, perform loss decomposition as detailed in Section 3.1, and construct a reward using the methodology outlined in Section 3.2. Finally, we utilize the reward to update the MAB algorithm. Algorithm 1 MAB for Solving COPs in view of MTL Require: Combinatorial neural solver $S_\Theta$ with parameters $\Theta$, task set $\mathcal{T}$, MAB algorithm $\mathcal{A}(\mathcal{T})$, loss function $L(\Theta)$, number of training loops $L$, update frequency for MAB algorithm $freq$. 
1: for $t = 1$ to $L$ do
2: Train $S_{\Theta(t)}$ on task $T^i_j$ selected by $\mathcal{A}(\mathcal{T})$ and store the gradient information $\nabla L^i_j(\Theta(t))$
3: if $t \mod freq = 0$ then
4: Obtain the reward $\vec{r}^i_j$ for each task $T^i_j$ from the stored gradients $\{\nabla L^i_j(\Theta(t))\}_{t=t_1}^{t_2}$ following Section 3.2
5: Update $\mathcal{A}(\mathcal{T})$ with the reward $\vec{r}^i_j$ for each task $T^i_j$
6: Clear the record of the gradient information
7: end if
8: end for
9: return Well-trained neural solver $S_\Theta$

For each type of COP $T^i$, we consider a neural solver $S_{\Theta^i}(T^i_j) : T^i_j \rightarrow Y^i_j$, where $\Theta^i$ are the parameters for COP $T^i$, and $T^i_j$ and $Y^i_j$ are the input and output spaces for COP $T^i$ with the problem scale $n_j$ (termed task $T^i_j$ in the sequel). The parameter vector $\Theta^i = (\theta^{\text{share}}, \theta^i)$ contains the shared and task-specific parameters for the COP $T^i$, and the complete set of parameters is denoted by $\Theta = \bigcup^K_{i=1} \Theta^i$. This parameter notation corresponds to the commonly used Encoder-Decoder framework in multi-task learning in Fig. 1, where $\theta^{\text{share}}$ represents the encoder, shared across all tasks, and $\theta^i$ represents the decoder, task-specific for each COP. Given the task loss functions $L^i_j(\Theta^i)$ for COP $T^i$ with the problem scale $n_j$, we investigate the widely used objective function:

$$\min_\Theta L(\Theta) = \sum^K_{i=1} \sum^{n_i}_{j=1} L^i_j(\Theta^i). \quad (1)$$

We propose a general framework based on Multi-Armed Bandits (MAB) to dynamically select tasks during training rounds, and a reasonable reward is constructed to guide the selection process. In particular, our approach establishes comprehensive task relations through the obtained influence matrix, which has the potential to empirically validate several common deep learning practices when solving COPs.

Overview. We aim to solve Eq. 1 using the MAB approach. Given the set of tasks $\mathcal{T} = \{T^i_j | j = 1, 2, ..., n_i, i = 1, 2, ..., K\}$, we select an arm (i.e., the task being trained) $a_t \in \mathcal{T}$ following an MAB algorithm, which yields a random reward signal $r_t$ that reflects the effect of the selection. The approximated expected reward is updated based on the received rewards. Essentially, our proposed method is applicable to any MAB algorithm. The general framework of MAB for solving COPs within the context of Multi-Task Learning (MTL) is outlined in Algorithm 1, and the overall pipeline is illustrated in Figure 1.

¹ According to the Encoder-Decoder framework, the encoder commonly refers to shared models, whereas the decoder concerns task-specific modules. In this study, the decoder component comprises two modules: "Header" and "Decoder", as illustrated in Figure 1.

3.1 Loss Decomposition

In the framework of MAB for solving COPs in view of MTL described in Algorithm 1, the way to design a reasonable reward to guide its update is crucial. In this part, we analytically derive a reasonable reward by decomposing the loss function for the Encoder-Decoder framework in Fig. 1. Following the previous notation, \( \Theta = \bigcup_{i=1}^{K} \Theta^i = \{\theta^{\text{share}}\} \cup \{\theta_i, i = 1, 2, ..., K\} \) are all trainable parameters.
We suppose that a meaningful reward should satisfy the following two properties: (1) it can benefit our objective and reveal the intrinsic training signal; (2) when a task is selected, training it always has a positive effect on that task in expectation. The difference in the loss function is an ideal choice, and previous work has used it to measure the task relationship (Fifty et al., 2021). However, such a measurement is invalid in our context because there are no significant differences among tasks (see Appendix B), so using such information may mislead the bandit selection. What’s more, the computation cost of the “lookahead loss” in Fifty et al. (2021) is considerably expensive when frequent reward signals are needed. We instead propose a more fundamental way based on gradients to measure the impact of training one task upon the others. To simplify the analysis, in Proposition 1, we assume that standard gradient descent (GD) is used to optimize Eq. 1 by training one task at each step \( t \), and then derive the loss decomposition under the encoder-decoder framework. Any other optimization method, e.g., Adam (Kingma & Ba, 2015), can also be used here with small modifications. We leave the detailed proofs for the GD and Adam optimizers to Appendix B.

**Proposition 1** (Loss decomposition for GD). Using the encoder-decoder framework with parameters \( \Theta = \bigcup_{i=1}^{K} \Theta^i = \{\theta^{\text{share}}\} \cup \{\theta_i, i = 1, 2, ..., K\} \) and updating parameters with standard gradient descent, \( \Theta(t+1) = \Theta(t) - \eta_t \nabla L(\Theta(t)) \), where \( \eta_t \) is the step size, the difference of the loss of task \( T_j^i \) from training step \( t_1 \) to \( t_2 \), \( \Delta L_j^i(t_1 \rightarrow t_2) = L_j^i(\Theta^i(t_2)) - L_j^i(\Theta^i(t_1)) \), can be decomposed as:

\[
\Delta L_j^i(t_1 \rightarrow t_2) = - \nabla^T L_j^i(\Psi^i(t_1)) \sum_{t=t_1}^{t_2} \mathbb{1}(a_t = T_j^i) \, \eta_t \nabla L_j^i(\Theta^i(t)) + \nabla^T L_j^i(\Psi^i(t_1)) \sum_{q \neq j} \sum_{t=t_1}^{t_2} \mathbb{1}(a_t = T_q^i) \, \eta_t \nabla L_q^i(\Theta^i(t)) + \nabla^T_{\theta^{\text{share}}} L_j^i(\Psi^i(t_1)) \sum_{p=1}^{K} \sum_{q=1}^{n_p} \sum_{t=t_1}^{t_2} \mathbb{1}(a_t = T_p^q) \, \eta_t \nabla_{\theta^{\text{share}}} L_p^q(\Theta^p(t)), \quad (2)
\]

where \( \nabla L(\Theta) \) denotes the gradient taken w.r.t. \( \Theta \), \( \nabla_\theta L(\Theta) \) denotes the gradient taken w.r.t. \( \theta \subseteq \Theta \), \( \Psi^i(t_1) \) is some vector between \( \Theta^i(t_1) \) and \( \Theta^i(t_2) \), and \( \mathbb{1}(a_t = T_j^i) \) is the indicator function.

The idea behind Eq. 2 is that the change in the loss of task \( T_j^i \) from \( t_1 \) to \( t_2 \) can be decomposed into three parts: (a) the effect of training \( T_j^i \) itself w.r.t. \( \Theta^i \); (b) the effect of training the same kind of COP \( \{T_q^i, q \neq j\} \) w.r.t. \( \Theta^i \); and (c) the effect of training other COPs \( \{T_p^q, p \neq i\} \) w.r.t. \( \theta^{\text{share}} \). Indeed, we quantify the impact of different tasks on \( T_j^i \) through this decomposition, which provides the intrinsic training signals for designing reasonable rewards.

3.2 Reward Design and Influence Matrix Construction

In this part, we design the reward and construct the intra-task relations based on the loss decomposition introduced in Section 3.1. Though Eq. 2 reveals the signal during training, the inner products of gradients from different tasks can significantly differ at scale (see Appendix F).
This will seriously mislead the bandit’s update, since improvements may come from large gradient values even when the gradients are almost orthogonal. To address this, we propose to use the cosine metric to measure the influence between task pairs. Formally, for task $T_j^i$ from $t_1$ to $t_2$, the influence from training the same type of COP $T_q^i$ on $T_j^i$ is:

$$m_q^i(t_1 \rightarrow t_2) = \frac{\nabla^T L_j^i(\Psi^i(t_1)) \sum_{t=t_1}^{t_2} \eta_t \mathbb{1}(a_t = T_q^i) \nabla L_q^i(\Theta^i(t))}{\|\nabla L_j^i(\Psi^i(t_1))\| \cdot \|\sum_{t=t_1}^{t_2} \eta_t \mathbb{1}(a_t = T_q^i) \nabla L_q^i(\Theta^i(t))\|}, \quad (3)$$

and the influence from training other types of COPs $T_p^q$ on $T_j^i$ is:

$$m_p^q(t_1 \rightarrow t_2) = \frac{\nabla^T_{\theta^{\text{share}}} L_j^i(\Psi^i(t_1)) \sum_{t=t_1}^{t_2} \eta_t \mathbb{1}(a_t = T_p^q) \nabla_{\theta^{\text{share}}} L_p^q(\Theta^p(t))}{\|\nabla_{\theta^{\text{share}}} L_j^i(\Psi^i(t_1))\| \cdot \|\sum_{t=t_1}^{t_2} \eta_t \mathbb{1}(a_t = T_p^q) \nabla_{\theta^{\text{share}}} L_p^q(\Theta^p(t))\|}. \quad (4)$$

Given Eqs. 3-4, we denote the influence vector to $T_j^i$ as:

$$m_j^i(t_1 \rightarrow t_2) = (\ldots, m_1^i(t_1 \rightarrow t_2), \ldots, m_j^i(t_1 \rightarrow t_2), \ldots, m_{n_i}^i(t_1 \rightarrow t_2), \ldots)^T. \quad (5)$$

Based on Eq. 5, an influence matrix $M(t_1 \rightarrow t_2) = (\ldots, m_j^i(t_1 \rightarrow t_2), \ldots)^T \in \mathbb{R}^{\sum_{k=1}^{K} n_k \times \sum_{k=1}^{K} n_k}$ can be constructed to reveal the relationship between tasks from time step $t_1$ to $t_2$. The influence matrix $M(t_1 \rightarrow t_2)$ has several properties: (1) $M(t_1 \rightarrow t_2)$ has blocks $M^i(t_1 \rightarrow t_2) \in \mathbb{R}^{n_i \times n_i}$ in the diagonal positions, which are the sub-influence matrices of the same kind of COP with different problem scales; (2) $M(t_1 \rightarrow t_2)$ is asymmetric, which is consistent with the general understanding in multi-task learning; (3) the row-sums of $M(t_1 \rightarrow t_2)$ are the total influences that one task receives from all tasks; (4) the column-sums of $M(t_1 \rightarrow t_2)$ are the total influences that one task exerts on all tasks. Given the meaning of the elements in $M(t_1 \rightarrow t_2)$, its column-sum:

$$r(t_1 \rightarrow t_2) = 1^T \cdot M(t_1 \rightarrow t_2) \in \mathbb{R}^{1 \times \sum_{k=1}^{K} n_k} \quad (6)$$

provides a meaningful reward signal for selecting tasks, which we can use to update the bandit algorithm. Moreover, if we denote the update frequency of computing the influence matrix by $\Delta T$ and the overall training time by $n \Delta T$, then an average influence matrix $W$ can be constructed from the influence matrices $\{M(k \Delta T \rightarrow (k + 1) \Delta T), k = 0, 1, \ldots, n - 1\}$ collected during the training process:

$$W = \frac{1}{n} \sum_{k=0}^{n-1} M(k \Delta T \rightarrow (k + 1) \Delta T), \quad (7)$$

revealing the overall task relations across the training process. When computing the bandit rewards, there remains an issue regarding the approximation of $\nabla L_j^i(\Psi^i(t_1))$ in Eqs. 3 and 4. Moreover, there is a lack of theoretical work discussing this issue in the context of neural networks. We propose a heuristic method that relies on a widely accepted assumption in multi-task learning:

**Assumption 1.** When using the cosine metric on the gradients to measure the similarity between tasks, a task should have a similarity of 1 with itself (Wang et al., 2020; Yu et al., 2020).

The training influences determined by Eqs.
3 and 4 can be seen as the similarity between tasks measured by the cosine metric; therefore, we can set

$$\nabla L_j^i(\Psi^i(t_1)) = \sum_{t=t_1}^{t_2} \eta_t \mathbb{1}(a_t = T_j^i) \nabla L_j^i(\Theta^i(t)) \quad (8)$$

in Eq. 3 when $q = j$, in order to ensure that the self-task similarity $m_j^i(t_1 \rightarrow t_2)$ equals 1.

4 EXPERIMENTS

In this section, we conduct a comparative analysis between our proposed method and both single-task training (STL) and extensive multi-task learning (MTL) methods to demonstrate the efficacy of our approach in addressing various COPs under different evaluation criteria. Specifically, we examine two distinct scenarios: (1) under identical training budgets, we aim to showcase the convenience of our method in automatically obtaining a universal combinatorial neural solver for multiple COPs, circumventing the challenges of balancing losses in MTL and allocating time for each task in STL; (2) given the same number of training epochs, we seek to illustrate that our method can derive a potent neural solver with excellent generalization capability. Furthermore, we employ the influence matrix to analyze the relationship between different COP types and the same COP type with varying problem scales.

Experimental settings. We explore four types of COPs: the Travelling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), the Orienteering Problem (OP), and the Knapsack Problem (KP). Detailed descriptions can be found in Appendix A. Three problem scales are considered for each COP: 20, 50, and 100 for TSP, CVRP, and OP; and 50, 100, and 200 for KP. We employ the notation “COP-scale”, such as TSP-20, to denote a particular task, resulting in a total of 12 tasks. We emphasize that the derivation presented in Section 3.1 applies to a wide range of loss functions encompassing both supervised learning-based and reinforcement learning-based methods. In this study, we opt for reinforcement learning-based neural solvers, primarily because they do not necessitate manual labeling of high-quality solutions. As a representative method in this domain, we utilize the Attention Model (AM) (Kool et al., 2019) as the backbone and employ POMO (Kwon et al., 2020) to optimize its parameters. Concerning the bandit algorithm, we select Exp3, and the update frequency is set to 12 training batches. We discuss the selection of the MAB algorithms and the update frequency in Appendix C, with details on training and configuration in Appendix E.

4.1 Comparison with Single Task Training and Multi-Task Learning

In this part, we explore the differences in performance between our method, MTL, and STL across various comparison criteria, highlighting our method’s superior efficiency and generalization ability.

Comparison under same training budgets. We now consider a practical scenario with limited training resources available for neural solvers for all tasks. Our method addresses this challenge by concurrently training all tasks using an appropriate task sampling strategy. However, establishing a schedule for STL is difficult due to the lack of information regarding resource allocation for each task, and MTL methods are hindered by efficiency issues arising from joint task training. In this section, we compare our method with naive STL and MTL methods in terms of the optimality gap:

\[ gap\% = \left| \frac{\text{obj}}{\text{gt}} - 1 \right| \times 100, \]

averaged over 10,000 instances for each task under an identical training time budget.
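Before turning to the budgets, a compact sketch of the reward computation in Section 3 may be helpful: it forms the cosine influence matrix of Eqs. 3-5 from accumulated per-task gradients (using Assumption 1 for the reference gradient) and returns the column-sum reward of Eq. 6, which is then fed to the bandit algorithm, e.g., Exp3. All function and variable names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def influence_and_reward(grad_sums, shared_slice, cop_type):
    """grad_sums[k]: accumulated eta_t * gradient for task k over the window,
    flattened as [shared-encoder params, task-specific params]; shared_slice
    selects the shared-encoder coordinates; cop_type[k] is the COP type of task k."""
    n_tasks = len(grad_sums)
    M = np.zeros((n_tasks, n_tasks))
    for j in range(n_tasks):                # task receiving the influence
        ref = grad_sums[j]                  # Assumption 1: reference gradient is task j's own update
        for q in range(n_tasks):            # task whose training causes the influence
            if cop_type[j] == cop_type[q]:
                M[j, q] = cosine(ref, grad_sums[q])                               # Eq. 3
            else:
                M[j, q] = cosine(ref[shared_slice], grad_sums[q][shared_slice])   # Eq. 4
    reward = M.sum(axis=0)                  # column sums, Eq. 6
    return M, reward
```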
The total training time budget is designated as \( B \), with each type of COP receiving resources equitably for \( \frac{B}{T} \) within the STL framework. Two schedules are considered for the allocation of time across varying problem scales for the same category of COP: (1) Average allocation, denoted as STL\(_{\text{avg}}\), indicating a uniform distribution of resources for each task; (2) Balanced allocation, denoted as STL\(_{\text{bal}}\), signifying a size-dependent resource assignment with a 1:2:3 ratio from small to large problem scales, categorizing tasks into easy-median-hard levels. The first schedule is suitable for realistic scenarios where information regarding the tasks is unavailable, while the second is advantageous when prior knowledge is introduced. To mitigate the impact of extraneous computations, we calculate the time necessary to complete one epoch for each task and convert the training duration into the number of training epochs for STL. Utilizing the same device, the training time for each task with STL and MTL methods can be found in Table 1 and Table 3. We assess three distinct training budgets: (1) Small budget: the time required to complete 500 training epochs using our method, approximately 1.59 days in GPU hours; (2) Medium budget: 1000 training epochs, consuming 3.28 days in GPU hours; and (3) Large budget: 2000 training epochs, spanning 6.64 days in GPU hours.

Table 1: Training time per epoch, represented in minutes. The COPs are classified into three scales: small, median, and large, which correspond to the sizes of 20, 50, and 100, respectively (50, 100, and 200 for KP).

| COP | Small | Median | Large |
|-----|-------|--------|-------|
| TSP | 0.19 | 0.39 | 0.75 |
| CVRP| 0.27 | 0.50 | 0.90 |
| OP | 0.20 | 0.41 | 0.60 |
| KP | 0.34 | 0.61 | 1.10 |

Table 2: Comparison among our proposed method, multi-task learning (MTL), and single task training (STL) utilizing the same training budget. Specifically, STLavg. and STLbal. denote the allocation of resources, with an even distribution and a balanced allocation ratio of 1 : 2 : 3, respectively, among tasks with varying scales from small to large. The reported results depict the optimality gap (↓).

| Method | TSP20 | TSP50 | TSP100 | CVRP20 | CVRP50 | CVRP100 | OP20 | OP50 | OP100 | KP50 | KP100 | KP200 | Avg. Gap |
|------------|-------|-------|--------|--------|--------|---------|------|------|-------|------|-------|-------|----------|
| STLavg. | 0.009% | 0.346% | 3.934% | 0.405% | 2.292% | 5.890% | 1.075% | 1.291% | 5.674% | 0.029% | 0.015% | 0.017% | 1.573% |
| STLbal. | 0.019% | 0.346% | 2.967% | 0.599% | 2.292% | 4.774% | 1.073% | 1.291% | 4.771% | 0.033% | 0.015% | 0.016% | 1.346% |
| Naive-MTL | 0.029% | 0.725% | 3.427% | 0.676% | 2.455% | 4.396% | 0.445% | 2.607% | 5.564% | 0.036% | 0.014% | 0.016% | 1.624% |
| Bandit-MTL | 0.035% | 0.401% | 2.817% | 0.717% | 2.346% | 4.460% | 0.153% | 1.148% | 5.486% | 0.036% | 0.014% | 0.016% | 1.296% |
| PCGrad | 0.230% | 0.762% | 4.476% | 1.051% | 2.817% | 5.606% | 0.626% | 2.773% | 7.735% | 0.041% | 0.018% | 0.022% | 1.046% |
| UW | 0.036% | 0.394% | 1.905% | 0.451% | 1.667% | 3.291% | 0.562% | 1.776% | 3.989% | 0.039% | 0.016% | 0.022% | 1.085% |
| CAGrad | 0.634% | 3.209% | 8.433% | 1.417% | 4.631% | 7.668% | 0.536% | 4.516% | 8.232% | 0.048% | 0.024% | 0.063% | 3.284% |
| IMTL | 27.53% | 53.71% | 77.15% | 175.3% | 345.3% | 560.3% | 8.634% | 31.43% | 53.6% | 71.8% | 125.4% | | |
| Nash-MTL | 0.131% | 0.280% | 0.858% | 0.466% | 2.852% | 1.471% | 3.486% | 7.412% | 0.045% | 0.016% | 0.021% | 0.206% |
| Random | 0.041% | 0.402% | 1.75% | 0.489% | 1.797% | 3.298% | 0.987% | 0.794% | 2.488% | 0.032% | 0.014% | 0.015% | 0.862% |
| Ours | 0.030% | 0.297% | 1.687% | 0.422% | 1.554% | 2.861% | 1.081% | 0.533% | 2.153% | 0.031% | 0.014% | 0.014% | 0.710% |

Extensive MTL baselines are considered here: Bandit-MTL (Mao et al., 2021), PCGrad (Yu et al., 2020), Nash-MTL (Navon et al., 2022), Uncertainty-Weighting (UW) (Kendall et al., 2018), CAGrad (Liu et al., 2021a), and IMTL (Liu et al., 2021b). We also include the random policy, which samples a task uniformly at each training slot; the results are presented in Table 2. In general, our method outperforms the MTL and STL methods in terms of the average gap across all the budgets used. Specifically, our method yields consistent improvements for 10 out of 12 tasks under the small budget, and for 8 and 7 out of 12 tasks under the medium and large budgets, respectively. Moreover, our approach demonstrates a stronger focus on more challenging problems, as it attains greater improvements for larger problem scales compared to smaller ones. What’s more, when comparing with all MTL methods, our method demonstrates two superior advantages:

• Better solution quality and efficiency: in Table 2, typical MTL methods fail to obtain a powerful neural solver efficiently, and some of them even work worse than naive MTL and STL under limited budgets;

• More resource-friendly: the computational complexity of typical MTL methods grows linearly w.r.t. the number of tasks, so conducting these training methods still requires heavy training resources (high-performance GPUs with large memory). The exact training times for one epoch, in GPU hours, are listed in Table 3. Under the same training setting, intermediate termination of a prolonged training epoch for typical MTL methods incurs wasted computation. However, our method trains only one task at each time slot, resulting in rapid epoch-wise training that facilitates flexible experimentation and iteration.

Table 3: Average time consumption of MTL methods, in GPU hours, for training one epoch.

| Method | GPU Hours |
|------------|-----------|
| Bandit-MTL | 1.04 |
| PCGrad | 6.02 |
| Nash-MTL | 5.87 |
| UW | 1.00 |
| IMTL | 5.61 |
| CAGrad | 5.24 |
| Ours | 0.07 |

It’s also interesting to see that the random policy outperforms STL and the best-performing MTL baselines in our context, underscoring the positive effects of changing the training paradigm.
² Detailed analysis about the computational complexity of each MTL method is in Appendix D.

Figure 2: A comparison between single task training (STL) and our method is showcased in this figure, with both methods utilizing the same number of training epochs (1000 in this case). While STL achieves superior performance, our method is capable of effectively tackling all tasks simultaneously, as evidenced by the strong mean results it produces.

Furthermore, our proposed method surpasses the random policy, providing evidence of the additional improvements achieved through the integration of the bandit algorithm. As the training budgets increase, STL’s advantages become evident in easier tasks such as TSP, CVRP-20, OP-20, and KP-50. However, our method continues to deliver robust results for more difficult tasks like CVRP-100 and OP-100. Simultaneously, we observe a decrease in gain as the budget expands, aligning with our understanding that negative transfer exists among different tasks. In addition to performance gains, the most notable advantage of our approach is that it does not require prior knowledge of the tasks and is capable of dynamically allocating resources for each task, which is crucial in real-world scenarios. When implementing STL, biases are inevitably introduced with equal allocation. As demonstrated in Table 2, the performance of two distinct allocation schedules can differ significantly: STLbal. consistently outperforms STLavg. due to the introduction of appropriate priors for STL.

Table 4: The comparison results are obtained by training our model for 1000 epochs and STL models for 100 epochs each, amounting to a total of 1200 epochs.

| | TSP20 | TSP50 | TSP100 | CVRP20 | CVRP50 | CVRP100 | OP20 | OP50 | OP100 | KP50 | KP100 | KP200 | Avg. Gap |
|-------|-------|-------|--------|--------|--------|---------|------|------|-------|------|-------|-------|----------|
| STL | 0.011% | 0.244% | 1.578% | 0.465% | 1.706% | 3.194% | -1.133% | 0.781% | 2.898% | 0.026% | 0.013% | 0.01237% | 0.316% |
| Ours | 0.019% | 0.202% | 1.086% | 0.348% | 1.284% | 2.362% | -1.114% | 0.224% | 1.277% | 0.030% | 0.012% | 0.01236% | 0.478% |

Comparison under same training epochs. We conduct a comparison under the same number of training epochs by training our method on the 12 tasks mentioned before for 1000 epochs in total, and comparing it with corresponding Single Task Learning (STL) neural solvers that are trained for 1000 epochs on each of their respective tasks. This is, by no means, a fair comparison, as our method dynamically chooses only one task to train at each of its 1000 epochs, resulting in a much smaller sample size per task than STL. Despite this, we choose this comparison as an intuitive way to demonstrate the superior generalization ability of our method under such extreme conditions. We present the results in Figure 2 and Table 4. Compared to individual tasks, as shown in Table 4, our method (trained for 1000 epochs) consistently outperforms STL (trained for $100 \times 12 = 1200$ epochs) across most tasks, with exceptions in TSP20, OP20, and KP50. In most cases, our method’s performance is equivalent to that of using 100 to 300 epochs of STL. However, STL can only obtain one model per task in this context and lacks the ability to handle different types of COPs or to generalize well when presented with the same type of COP but with varying problem scales.
As a result, our method demonstrates unparalleled superiority in three ways: (1) when considering the average performance on all problem scales for each type of COP, our method obtains the best results in CVRP, OP, and KP, and is equivalent to the results achieved by training TSP for about 500 epochs. This showcases our method’s excellent generalization ability for problem scales; (2) Our method can handle various types of COPs under the same number of training epochs, which is impossible for STL due to the existence of task-specific modules; (3) Our method’s training time is strictly shorter than the longest time-consuming task. 4.2 Study of the Influence Matrix Our approach has an additional advantage as it facilitates the identification of the task relationship through the influence matrix developed in Section 3.2. The influence matrix allows us to capture the inherent relationship among tasks. Additionally, we provide empirical evidence pertaining to the experience and observation in the learning to optimize community. We present a detailed view of the influence matrix in Figure 3, revealing significant observations: (1) Figure 3a highlights that the Figure 3: This figure provides a visual representation of the mutual influence between tasks. The left-hand side displays the average influence matrix, as defined in Eq. 7, which reveals significant mutual influences existing among the COPs of the same type. Meanwhile, the right-hand side illustrates the influence value, as defined in Eqs. 3-4, throughout the training process, further demonstrating the extensive mutual impacts among the COPs of the same type and the less pronounced interactions between COPs of different types. The influence matrix computed using Eq. 7 possesses a diagonal-like block structure. This phenomenon suggests a strong correlation between the same type of COP with different problem scales, which is not present within different types of COPs due to the corresponding elements being insignificant. Furthermore, within the same type of COP, we observe that the effect of training a task on other tasks lessens with the increase in the difference of problem scales. Hence, training combinatorial neural solvers on one problem scale leads to higher benefits on similar problem scales than on those that are further away. For instance, the influence of training TSP-20 on TSP-50 is 0.1007, which is higher than the influence on TSP-100, which is $-0.1196$. Similarly, training TSP-100 on TSP-50 has a larger influence than that on TSP-20, as can be observed from influences of $-0.0354$ and $-0.0978$, respectively; (2) Figure 3b presents a visualization of the influence resulting from Eq. 3-4 over the course of the training process. Each point in the chart represents the influence of a particular task on another task at a specific time step. Notably, tasks belonging to the same type of COP are highly influential towards each other due to the large variance of their influence values. Conversely, influences between different types of COPs are negligible, evident from the influence values being concentrated around 0. This striking observation showcases that the employed combinatorial neural solver and algorithm, AM (Kool et al., 2019) and POMO (Kwon et al., 2020), segregate the gradient space into distinct orthogonal subspaces, and each of these subspaces corresponds to a particular type of COP. Furthermore, this implies that the gradient of training each variant of COP is situated on a low-dimensional manifold. 
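Eqs. 3-4 and Eq. 7 themselves are not reproduced in this excerpt, so the snippet below is only a plausible sketch of how such an influence matrix can be estimated: take a single update on task i, measure the relative drop it causes in every task's validation loss, and roll the model back before probing the next task. The `train_step` and `val_loss` helpers, and the exact normalization, are assumptions rather than the paper's definitions.

```python
import copy
import numpy as np

def one_step_influence(model, tasks, train_step, val_loss):
    """influence[i, j] > 0 means one update on task i reduced task j's validation loss
    (positive transfer); values below zero indicate negative transfer."""
    n = len(tasks)
    influence = np.zeros((n, n))
    base = np.array([val_loss(model, t) for t in tasks])       # losses before any update
    for i, task in enumerate(tasks):
        snapshot = copy.deepcopy(model.state_dict())           # assumes a torch-style model
        train_step(model, task)                                # one gradient step on task i
        after = np.array([val_loss(model, t) for t in tasks])
        influence[i] = (base - after) / np.maximum(np.abs(base), 1e-12)
        model.load_state_dict(snapshot)                        # roll back before the next probe
    return influence

# Averaging such matrices over training slots produces a task-relation map analogous to
# Figure 3a, where blocks corresponding to the same COP type stand out.
```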
As a result, we are motivated to develop more parameter-efficient neural solver backbones and algorithms. 5 CONCLUSIONS In the era of large models, training a unified neural solver for multiple combinatorial tasks is in increasing demand, whereas such a training process can be prohibitively expensive. In this paper, given limited training budgets or resources, we propose an efficient training framework to boost the training of unified multi-task combinatorial neural solvers with a multi-armed bandit sampler. To achieve this, we perform the theoretical loss decomposition, resulting in the meaningful influence matrix that can reveal the intrinsic task relations among different COP tasks, providing evidence for several empirical observations in the area of learning to optimize. We believe that this framework can be powerful for multi-task learning in a broader sense, especially in scenarios where resources are limited, and generalization is crucial. It can also help analyze task relations in the absence of priors. Furthermore, the proposed framework is model-agnostic, which makes it applicable to any existing neural solvers. Different neural solvers may produce varying results on the influence matrix, and a perfect neural solver may gain mutual improvements even from different types of COPs. Therefore, there is an urgent need to study the unified backbone and representation method for solving COPs. REFERENCES Forest Agostinelli, Alexander Shmakov, Stephen McAleer, Roy Fox, and Pierre Baldi. A* search without expansions: Learning heuristic functions with deep q-networks. arXiv preprint arXiv:2102.04518, 2021. Shipra Agrawal and Navin Goyal. Analysis of thompson sampling for the multi-armed bandit problem. In Shie Mannor, Nathan Srebro, and Robert C Williamson (eds.), COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, volume 23 of JMLR Proceedings, pp. 39.1–39.26. JMLR.org, 2012. URL http://proceedings.mlr.press/v23/agrawal12/agrawal12.pdf. Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th annual foundations of computer science, pp. 322–331. IEEE, 1995. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47:235–256, 2002. Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=Bk9mxISFZ. Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research, 2020. Lilian Besson. SMPyBandits: an Open-Source Research Framework for Single and Multi-Players Multi-Arms Bandits (MAB) Algorithms in Python. Online at: github.com/SMPyBandits/SMPyBandits, 2018. URL https://github.com/SMPyBandits/SMPyBandits/. Code at https://github.com/SMPyBandits/SMPyBandits/, documentation at https://smpybandits.github.io/. Jieyi Bi, Yining Ma, Jiahai Wang, Zhiguang Cao, Jinbiao Chen, Yuan Sun, and Yeow Meng Chee. Learning generalizable models for vehicle routing problems via knowledge distillation. arXiv preprint arXiv:2210.07686, 2022. Olivier Chapelle and Lihong Li. An empirical evaluation of thompson sampling. 
Advances in neural information processing systems, 24, 2011. Hanni Cheng, Haosi Zheng, Ya Cong, Weihao Jiang, and Shiliang Pu. Select and optimize: Learning to solve large-scale tsp instances. In International Conference on Artificial Intelligence and Statistics, pp. 1219–1231. PMLR, 2023. Ronan Collobert and Jason Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In William W. Cohen, Andrew McCallum, and Sam T. Roweis (eds.), Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pp. 160–167. ACM, 2008. doi: 10.1145/1390156.1390177. URL https://doi.org/10.1145/1390156.1390177. Chris Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn. Efficiently identifying task groupings for multi-task learning. Advances in Neural Information Processing Systems, 34: 27503–27516, 2021. Zhang-Hua Fu, Kai-Bin Qiu, and Hongyuan Zha. Generalize a small pre-trained model to arbitrarily large TSP instances. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 7474–7482. AAAI Press, 2021. URL https://ojs.aaai.org/index.php/AAAI/article/view/16916.
mCnWT9OVvK
What are the main actionable conclusions from your work that any researcher working on multi-document retrieval-augmented long-form question answering systems should know? For instance, - what does Figure 3.(a) imply in terms of optimal ordering of documents presented to the LFQA system? - does retrieving and using longer documents imply an improvement of end-to-end quality? - how do various models handle different degrees of noise (irrelevant documents) in their context?
UNDERSTANDING RETRIEVAL AUGMENTATION FOR LONG-FORM QUESTION ANSWERING Anonymous authors Paper under double-blind review ABSTRACT We present a study of retrieval-augmented language models (LMs) on long-form question answering. We analyze how retrieval augmentation impacts different LMs, by comparing answers generated from models while using the same evidence documents, and how differing quality of retrieval document set impacts the answers generated from the same LM. We study various attributes of generated answers (e.g., fluency, length, variance) with an emphasis on the attribution of generated long-form answers to in-context evidence documents. We collect human annotations of answer attribution and evaluate methods for automatically judging attribution. Our controlled study provides new insights on how retrieval augmentation impacts long, knowledge-rich text generation of LMs. We further reveal novel attribution patterns for long text generation and analyze the main culprits of attribution errors. Together, our analysis reveals how retrieval augmentation impacts long knowledge-rich text generation and provide directions for future work. 1 INTRODUCTION Long-form question answering (LFQA) is designed to address any type of question that could be asked. Instead of extracting spans in the evidence document, LFQA systems generate paragraph-long, complex answers to questions by leveraging parametric knowledge in large language models (LLMs) and retrieved documents provided at inference time. In recent years, we learned surprisingly impressive – yet brittle (Ji et al., 2023; Liu et al., 2023b) – LFQA capabilities of large-scale LLMs. Recent work (Nakano et al., 2021) proposes retrieval as a powerful tool to provide up-to-date, relevant information to LMs. Yet, our understanding of how retrieval augmentation impacts generation in LMs is limited, and retrieval augmentation does not always affect LMs the way we anticipate. Liu et al. (2023a) discovered how information placed in the middle of contexts is not used by LMs and a line of work (Chen et al., 2022; Longpre et al., 2021) showed parametric knowledge continues to affect generation even when relevant document is provided in-context for factoid QA task. We study how retrieval impacts answer generation for LFQA, a complex long text generation task. We present two controlled study settings (illustrated in Figure 1): one fixing the LM and varying evidence documents and the other fixing evidence documents and varying the LMs. As evaluating the quality of LFQA is notoriously difficult (Krishna et al., 2021), we start our analysis by measuring surface features (e.g. length, perplexity) that correlate with specific answer qualities such as coherence (Xu et al., 2023). One desirable property of retrieval augmented LFQA system is whether the generated answer can be attributed to provided evidence documents. To evaluate this, we newly collect human annotations on sentence-level attribution (Rashkin et al., 2021) and evaluate off-the-shelf models for detecting attributions (Schuster et al., 2021) on our collected dataset (Section 7). Our analysis on surface patterns reveals that retrieval augmentation changes LM’s generation substantially. Some effects, e.g., change in the length of generated answers, were pronounced even when provided documents are not relevant. 
Relevant in-context evidence documents lead to more substantial changes, leading LMs to generate more unexpected sentences (measured by higher perplexity), while irrelevant document does not have the same effects. The impact of retrieval augmentation, even with the same set of evidence documents, can result in opposite effects for different base LMs. We provide an in-depth analysis of attribution with our newly annotated dataset, which can serve as a benchmark for evaluating attribution. We observe NLI models that performed well in detecting Figure 1: We study (A) how differing LMs use the same in-context evidence documents to generate answer and (B) how in-context evidence documents of various degree of relevance affect the answer generation. We analyze generated answers on surface patterns (self-bleu, perplexity, etc) and their attribution to evidence documents. Attribution judgements are made per sentence, either by annotators (Section 5) or automatically from NLI model (Section 7). O’s, Δ’s and X’s denote supported, partially supported and unsupported sentences respectively. Colored texts are unsupported contents. attribution in factoid QA (Bohnet et al., 2022) performs competitively in LFQA setting as well, significantly outperforming chance, yet falls behind human agreement by 15% in accuracy. Our study reveals that attribution quality varies significantly across base LMs, even when they are provided with the same set of documents. Further, we find that a model (Nakano et al., 2021) that are trained with retrieval augmentation are more faithful to the evidence documents, and that LMs can ignore irrelevant evidences when needed. We provide new insights on attribution patterns for long text generation. For instance, the last generated sentence is substantially less attributable than earlier sentences, and the generated text roughly follows the order of the in-context evidence documents, even when the in-context document is a concatenation of multiple documents. Taken together, our analysis provides a better understanding of how LMs use in-context evidence documents for long-form question answering and concrete directions for future work in this domain. 2 BACKGROUND AND RELATED WORK LFQA (Fan et al., 2019; Stelmakh et al., 2022) requires models to generate paragraph-length answers to complex, open-ended questions. Combining the challenges of information retrieval and text generation, LFQA remains difficult and an under explored topic in NLP. Prior work (Krishna et al., 2021) suggested retrieval augmented models largely ignore retrieved documents during generation. More recent work, WebGPT (Nakano et al., 2021), introduced a web agent that searches the web and integrate the information to LMs. We evaluate the behavior of this model closely. Retrieval augmented generation has received attention as a way to provide up-to-date, relevant documents to language models at inference time (Ram et al., 2023), showing consistent performance gains across multiple tasks (Shi et al., 2023). A line of work investigates how LMs incorporate in-context documents (Mallen et al., 2023; Liu et al., 2023a) with their parametric knowledge on simpler tasks such as factoid QA. Wang et al. (2023) studies the impact of retrieval in open-ended text generation with kNN-LM (Khandelwal et al., 2019). In this work, we focus on LFQA, which requires factual, attributable generation over long sequences of tokens. 
Our study put an emphasis on attribution of long-form answers with respect to the prepended evidence document set. We follow the AIS framework (Rashkin et al., 2021), an evaluation framework for whether a system generated text can be derived by a given knowledge source. Bohnet et al. (2022) and Yue et al. (2023) study automatically evaluating attribution; the former uses entailment models, while the latter prompts LLMs. Gao et al. (2023b) builds QA models that generate text along with citations and evaluates the citation quality of the generations automatically. Related to our work, Bohnet et al. (2022) presents a controlled study of attribution (e.g., varying evidence documents and how it impact attribution) on factoid QA with Wikipedia as retrieval corpus. Recent work (Liu et al., 2023b) annotates attribution quality in long-form answers generated from commercial generative search engines. While they provide a comprehensive study on attribution quality with manual annotations, their study on black box models is limited, as they do not have knowledge of how the cited documents were integrated into the language models. For instance, cited documents could have been retrieved post hoc (Gao et al., 2023a). We instead present a controlled study involving open sourced models, and analyze their data in Section 6. 3 STUDY SETTING We plan a controlled study on how retrieval augmentation impacts long-form answer generation for LMs, observing surface features and attribution while varying evidence document sets and LMs. In this section, we describe our experimental setting. Dataset We source questions from ELI5 dataset (Fan et al., 2019), which contains questions from the Reddit forum “Explain Like I’m Five”. We use the entire test set released by WebGPT (Nakano et al., 2021) (271 questions) for automatic evaluation (Section 4.7.2), and randomly sample 100 questions to collect manual attribution annotations (Section 5). Knowledge Source: Evidence Documents For each question, we compile four evidence document sets to examine how models use documents of varying degree of relevance. Each document set $D$ contains roughly 3-4 document snippets $\{d_1, d_2, \ldots, d_M\}$, each snippet containing roughly 100 words. The statistics on each set can be found in Appendix A.1. We describe each document set below: • Human Demonstration Trained annotators from prior study (Nakano et al., 2021) used commercial search engine (Bing) to gather evidence documents to answer questions. We include these as “gold” documents that are considered relevant for answering questions by humans. • WebGPT Retriever We consider documents retrieved by the WebGPT (175B) (Nakano et al., 2021) model. Their study found using these documents result in high-quality answer generation. • Bing Search We retrieve relevant documents using Bing Search API with the question as the query, and obtain the top 10 pages returned by the API, and retrieve four 100-word segments from aggregate search results. Post-processing details can be found in Appendix A.2. • Random To simulate a set of irrelevant documents, we randomly sample another question in the test set and take the corresponding documents retrieved by WebGPT. We evaluate the relevance of first three sets of documents manually by sampling 20 questions and examine the document set of each type. We find that WebGPT and Bing documents contains sufficient information to answer the question for 85% and 45% of the examples, respectively. More details on the manual study is in Appendix B.3. 
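The exact instruction wording and decoding settings live in Appendix A.3; the snippet below is only a minimal sketch of how an evidence set of this shape (three to four snippets of roughly 100 words) might be concatenated with the question into a single prompt and sampled several times. The `generate` callable and the instruction sentence are placeholders, not the prompts actually used.

```python
from typing import Callable, List, Optional

def build_prompt(question: str, documents: Optional[List[str]]) -> str:
    """Prepend the concatenated evidence document set to the question, followed by a brief instruction."""
    parts = []
    if documents:
        for i, doc in enumerate(documents, start=1):
            parts.append(f"Document [{i}]: {doc.strip()}")
    parts.append(f"Question: {question.strip()}")
    parts.append("Write a detailed, paragraph-long answer, drawing on the documents above when relevant.")
    return "\n\n".join(parts)

def sample_answers(question: str, documents: Optional[List[str]],
                   generate: Callable[[str], str], n_samples: int = 3) -> List[str]:
    """Sample several answers per setting to study answer variability."""
    prompt = build_prompt(question, documents)
    return [generate(prompt) for _ in range(n_samples)]
```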
Base LMs & Answer Generation We mainly consider three LMs: WebGPT (175B) (Nakano et al., 2021), GPT-3 (text-davinci-003) (Brown et al., 2020) and Alpaca (Taori et al., 2023). The WebGPT model is trained to interact with a commercial search engine (Bing) and compose long-form answers based on the information gathered from the search engine's output for questions from the ELI5 dataset.\footnote{While their model is not released, the model outputs were provided at https://openaipublic.blob.core.windows.net/webgpt-answer-viewer/index.html} We experimented with a range of open-sourced LMs (GPT-J (Wang & Komatsuzaki, 2021) (6B), Flan-T5 (Chung et al., 2022) (11B), Llama (Touvron et al., 2023) (7B, 13B, 30B) and Alpaca 7B (Taori et al., 2023)) and found Alpaca to be the best-performing one upon manual examination.\footnote{This is likely because ELI5 was one of the seed tasks used to generate training data for Alpaca.} Prediction examples for all other LMs can be found in Table 9 in Appendix B.4. We prepend the concatenated evidence document set to the question and provide it as a prompt to LMs with a brief instruction. We sample three answers for each setting to study answer variability. The decoding hyperparameters and prompts can be found in Appendix A.3.

Table 1: Generated answer statistics. We present mean values along with two standard deviations in parentheses: the first computed over the three answers generated for the same example, the second over answers for different examples ("–" where only one answer per example is available). Human and WebGPT answer outputs are taken from Nakano et al. (2021), and we generate the rest. We boldface six rows where we collect human annotations for attribution. Numbers in red and blue indicate decrease and increase from the base model, respectively.

| Model (+ evidence) | # Sentences | # Words | RankGen (↑) | Self-BLEU (↓) | Perplexity (↓) |
|--------------------|-------------|---------|-------------|---------------|----------------|
| WebGPT (+ WebGPT docs) | 6.7 (–/1.9) | 160 (–/33) | 11.35 (–/1.98) | 0.58 (–/0.07) | 13.81 (–/4.86) |
| GPT-3 | 9.3 (1.5/2.6) | 219 (30/51) | 12.77 (0.67/1.87) | 0.71 (0.04/0.06) | 6.13 (0.02/1.37) |
| +Human docs | 6.6 (0.9/1.8) | 172 (18/40) | 11.89 (0.60/1.86) | 0.62 (0.04/0.07) | 10.94 (0.05/3.94) |
| +WebGPT docs | 6.8 (0.9/1.8) | 185 (30/41) | 11.97 (0.60/1.79) | 0.62 (0.04/0.07) | 11.63 (0.13/4.16) |
| +Bing docs | 6.9 (1.0/1.9) | 179 (19/38) | 12.13 (0.68/1.91) | 0.64 (0.04/0.07) | 9.03 (0.12/3.24) |
| +Random docs | 7.6 (1.1/2.1) | 183 (19/39) | 12.40 (0.67/2.13) | 0.68 (0.04/0.07) | 6.76 (0.05/1.86) |
| Alpaca-7b | 5.0 (1.8/8.1) | 113 (33/73) | 12.17 (0.72/2.00) | 0.51 (0.09/0.15) | 11.95 (0.02/7.18) |
| +Human docs | 5.7 (1.9/3.6) | 138 (44/79) | 11.82 (0.88/2.32) | 0.55 (0.09/0.14) | 12.99 (0.20/5.73) |
| +WebGPT docs | 6.2 (2.3/7.9) | 145 (45/80) | 11.91 (0.75/2.07) | 0.55 (0.08/0.14) | 13.27 (0.13/5.68) |
| +Bing docs | 7.6 (2.8/5.0) | 187 (66/107) | 12.04 (0.78/2.05) | 0.59 (0.08/0.14) | 10.81 (0.13/5.34) |
| +Random docs | 5.2 (1.6/5.3) | 121 (32/65) | 12.25 (0.71/1.99) | 0.53 (0.08/0.14) | 11.92 (0.23/5.35) |
| Human (+ Human docs) | 5.1 (–/2.7) | 119 (–/59) | 9.29 (–/4.37) | 0.49 (–/0.17) | 17.63 (–/7.53) |

4 HOW IN-CONTEXT DOCUMENTS IMPACT SURFACE ANSWER STATISTICS

Metrics Unlike evaluating short, mostly entity answers (Rajpurkar et al., 2016; Fisch et al., 2019), evaluating the overall quality of long-form answers (Krishna et al., 2021; Xu et al., 2023) is notoriously difficult for both humans and machines.
In this section, we look at metrics that has been shown to correlate with specific aspects (e.g., fluency, coherence) (Xu et al., 2023) of answers, to quantify the differences between answers generated from different LMs or with different evidence documents. • Length: We report the number of sentences in the answer as well as number of words. The length is shown as a significant confounding factor in human evaluation for various tasks, with humans often preferring longer answers (Sun et al., 2019; Liu et al., 2022; Xu et al., 2023). • Self-BLEU (Zhu et al., 2018) is a metric that measures the lexical diversity of generated text. An answer is less diverse and contains more repetition if it has a higher Self-BLEU score. Prior work (Xu et al., 2023) also found that lower self-bleu score correlates to better coherence. • RankGen (Krishna et al., 2022) is an encoder (based on T5-XXL) trained with large-scale contrastive learning, ranking generation given a prefix. Higher RankGen score signifies more likely continuation of the prefix. We measure RankGen score between the question and answers. • Perplexity: We report perplexity of the answer measured with GPT-2-XL (Radford et al., 2019). Lower perplexity generally indicates more fluent generated text, though human-written texts (Holtzman et al., 2019) do not necessarily exhibit lower perplexity compared to model generated text. Results Table 1 presents the statistics for answers generated with three base LMs with various evidence documents. We present statistics on seven other LMs in Appendix B.4. Overall, prepending relevant documents yields bigger changes for both models compared to prepending random documents. Prepending unrelated documents has little effect on the automatic metrics for Alpaca, but impacts the generation of GPT-3, especially in length and Self-BLEU. This might be related to instruction tuning enables LMs (Alpaca in this case) to be more robust to irrelevant prompts (Webson & Pavlick, 2022). Using the same set of evidence documents brings different effects on two LMs. On GPT-3, providing documents results in shorter generations and less repetitions, while on Alpaca, it results in longer generations and more repetitions. Yet, on both models, adding relevant documents cause bigger changes in length than adding random documents. Overall, GPT-3 generates longer answers with less variability across examples. Alpaca answers exhibit higher variance across examples, consistently across all metrics. In both models, RankGen scores decrease when document set is more relevant. This can be as model incorporates new information from retrieved documents, generated answers become less predictable from the question alone. Perplexity also shows similar trends, with relevant documents increasing perplexity. This might be because it copies rare tokens from evidence documents, which will get assigned high perplexity when evaluating answer alone. Our finding diverges from Krishna et al. (2021) which showed conditioning on random vs. relevant documents does not bring differences in smaller-scale, fine-tuned retrieval-augmented LM, which fails to incorporate relevant information from retrieved documents into the answer. We will compare attribution patterns in Section 7.2, which again shows substantial difference between two settings. 
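The citations above pin these metrics down only loosely, so the sketch below uses one common recipe for each: Self-BLEU as the mean BLEU of every answer sentence against the remaining sentences of the same answer, and perplexity from a GPT-2 checkpoint scored on the answer alone (the paper uses GPT-2-XL; the small `gpt2` checkpoint simply keeps the example light). NLTK's punkt data is assumed to be installed.

```python
import math
import torch
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def self_bleu(answer: str) -> float:
    """Mean BLEU of each sentence against the rest of the same answer; higher means more repetition."""
    sents = [word_tokenize(s.lower()) for s in sent_tokenize(answer)]
    if len(sents) < 2:
        return 0.0
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu(sents[:i] + sents[i + 1:], sents[i], smoothing_function=smooth)
              for i in range(len(sents))]
    return sum(scores) / len(scores)

_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
_lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(answer: str) -> float:
    """Perplexity of the answer evaluated on its own, without the question or documents."""
    ids = _tokenizer(answer, return_tensors="pt").input_ids
    loss = _lm(ids, labels=ids).loss          # mean token-level cross-entropy
    return math.exp(loss.item())
```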
**Similarities Among Answer Generated with Different In-Context Settings** Retrieval-augmented LM combines its parametric knowledge and non-parametric knowledge from evidence documents to address the question (Longpre et al., 2021; Mallen et al., 2023; Zhou et al., 2023). We aim to understand the impact of combining information from evidence documents on generated answers, as opposed to relying solely on parametric knowledge. We thus compare lexical similarities (measured by bigram overlap) and embedding similarity (measured by SimCSE (Gao et al., 2021)) between answers generated with various evidence document settings and answers generated without documents. Figure 2 reports the results (results on all answer pairs can be found in Appendix B.1). To contextualize similarity scores, we provide an upper bound (0.19 for bigram, 0.875 for SimCSE) by computing average similarity between three pairs of samples generated without documents, and a lower bound (0.19 for bigram overlap and 0.15 for SimCSE) by computing the similarity between answers to different questions. According to both metrics, the answers generated without evidence document are most similar to the answers generated with random documents, followed by Bing documents, suggesting more relevant evidence set change answers more substantially. ## 5 COLLECTING ATTRIBUTION ANNOTATION While automatic metrics show that in-context documents influence generation substantially, we lack deeper understanding on how the answers change. In this section, we focus on attribution (Rashkin et al., 2021), which measures how much of generated answer can be entailed from the evidence documents. As automatically measuring attribution is nontrivial, we first collect human annotations. We compare our collected dataset with recent attribution datasets in Appendix C.4. Unlike prior work which conduct annotations on full-fledged system without altering evidence documents to the same LM, our annotation presents multiple evidence document for the same base LM. **Setup** Given a question $x$, generated answer $y$, which consist of $n$ sentences $y_1, y_2, \cdots, y_n$ and a set of reference documents $D$, we aim to label each answer sentence $y_i$ with one of the following: **Supported**, **Partially Supported**, **Not Supported** by $D$. If the sentence is Supported or Partially Supported, the annotator also provides a minimal subset of sentences from $D$ that support the answer sentence. Lastly, the annotator highlights the unsupported span if the sentence is Partially Supported. **Data Collection** We collect annotations for 100 questions randomly sampled from the ELI-5 test set on six LM-retrieval document set configurations, namely WebGPT + {WebGPT docs}; GPT-3 + {No docs, WebGPT docs, Human docs} and Alpaca + {No docs, WebGPT docs}. We use the prepended document set as the reference document, and use WebGPT documents for answers generated without documents. We collect three annotations per example as the task is somewhat subjective and take the majority label, discarding 3.4% of examples without majority vote. The interannotator agreement is reasonably high, with Krippendorff’s alpha at 0.71. More details about crowdsourcing, including recruitment and disagreement patterns, can be found in Appendix C. ## 6 INSIGHTS FROM ATTRIBUTION ANNOTATION RESULTS Equipped with manual annotation, we analyze how much of long-form answers can be attributed to evidence documents. Table 2 summarizes the annotation results. 
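As a small bookkeeping sketch of how the three per-sentence judgements can be reduced to the single labels summarized below, the snippet takes a strict majority vote and drops sentences without a majority (the 3.4% mentioned above). The label strings and data layout are assumptions about how the annotations are stored, and inter-annotator agreement itself is reported with Krippendorff's alpha in the paper rather than computed here.

```python
from collections import Counter
from typing import List, Optional

LABELS = ("Supported", "Partially Supported", "Not Supported")

def majority_label(votes: List[str]) -> Optional[str]:
    """Majority over (typically three) annotator judgements for one answer sentence;
    returns None when no label wins outright, in which case the sentence is discarded."""
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    if sum(1 for c in counts.values() if c == top) > 1:   # e.g. a 1-1-1 split
        return None
    return label

def label_percentages(all_votes: List[List[str]]) -> dict:
    """Per-label percentages over all kept sentences, as reported in Table 2."""
    kept = [l for l in (majority_label(v) for v in all_votes) if l is not None]
    return {l: round(100 * kept.count(l) / len(kept), 1) for l in LABELS} if kept else {}
```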
We first compare attribution performance of different models using the same evidence document set (the top section in Table 2). We observe that generations from the WebGPT model is most faithful to the evidence documents. When we look at the Alpaca model, even with the same evidence document setting, the percentage of unsupported sentences is ten times more than WebGPT. Unlike the other two models, WebGPT was fine-tuned for LFQA with evidence document prepended. This suggests that updating LMs under retrieval augmented setting might be helpful for LFQA, echoing findings from prior work in factoid QA (Bohnet et al., 2022) that retrieve-then-read systems which are trained with a retrieval component achieve more faithful generation. Unsurprisingly, answers generated without documents (last two rows) are largely irrelevant to reference document set (WebGPT docs). This does not necessarily mean the model generated answers are not factual, as valid answers to the same question can be very different (Krishna et al., 2021; Xu et al., 2023). Nonetheless, over 20% of sentences were supported by reference documents, suggesting LLMs exhibit level of some parametric knowledge that matches information in the reference documents. Comparing the same base model (GPT-3) provided with different evidence document sets (WebGPT docs vs. Human docs), we find that the model can use WebGPT docs more efficiently. This might be caused by WebGPT evidence documents being longer (about 10%) than human demonstration documents, providing more comprehensive information to copy from. Nonetheless, even with WebGPT docs, 15% of the answer sentences are not supported, suggesting that GPT-3 generates information that lie beyond what can be inferred from evidence documents. Does the order of information presented in the evidence documents impact the order of information presented in the generated answer? If LM is synthesizing information based on content alone, there should be little correlation, considering we provide a concatenated set of evidence documents, not a coherent single document. We plot the correspondence between answer sentence Figure 3: On the top (a)(b), we show the distribution of location of supporting sentences in the document set $D$ for Nth answer sentence chunk. We normalize by the column to visualize the distribution of supporting sentences across evidence documents for each answer sentence chunk. The “Avg” column shows the average across answer sentences, indicating how frequently each document chunk are supporting the answer. We report aggregate results on generation with documents in (a) and without documents (the bottom two generation settings in Table 2) in (b) as a control study. On the bottom, two graphs (c)(d) show the percentage of unsupported sentences by the relative location (sentence index divided by the number of sentences in the answer). Table 2: Attribution Annotation Results: The percentage of each attribution label of answer sentences with respect to their corresponding evidence document sets. For answers generated without documents, the answer were evaluated with WebGPT documents. | Setting | # Ex. 
| Supportedness | Partially | No | |--------------------------|-------|---------------|-----------|----| | WebGPT + WebGPT docs | 649 | 95% | 2% | 3% | | GPT-3 + WebGPT docs | 659 | 85% | 4% | 11%| | Alpaca + WebGPT docs | 545 | 61% | 7% | 32%| | GPT-3 + Human docs | 661 | 73% | 7% | 20%| | GPT-3 without docs | 896 | 22% | 8% | 70%| | Alpaca without docs | 447 | 23% | 6% | 71%| | Total | 3,857 | 59% | 6% | 35%| Table 3: List of attribution error type (and their frequency of occurrence in unsupported sentences) and example instance. | Retrieval Failure (54%) | Question: Why does it seem like when I watch something the second time around, it goes by faster than the first time I watched it? | Documents: … Basically, the busier you are during a time interval, the faster that time interval will feel like it passed. … (more about time goes by faster when you are not bored…) | Answer Sentence: However, when we watch something for the second time, our brains have had a chance to process the information and are able to make more efficient use of the information. | Explanation: The documents explain why time goes by faster when you are having fun, but the question is asking watching something the second time. | | Hallucinated Facts (72%) | Question: How does money go from my pocket, through the stock market, and to support the business I’ve bought stock from? | Documents: Stocks, or shares of a company, represent ownership equity in the firm, which give shareholders voting rights as well as a residual claim on corporate earnings in the form of capital gains and dividends. … (more about how stock market works) | Answer Sentence: You can purchase shares of the stock from a broker or through an online trading platform. | Explanation: The documents never mention where you can buy stock from. | | Incorrect Synthesis (14%) | Question: Seismologists: How do you determine whether an earthquake was naturally occurring or if it was human induced? | Documents: Studies of the numerous nuclear tests that took place during the Cold War show that explosions generate larger P waves than S waves when compared with earthquakes. Explosions also generate proportionally smaller Surface waves than P waves. | Answer Sentence: Natural earthquakes generate larger P waves and smaller Surface waves compared to nuclear tests. | Explanation: Explosion generate larger P waves, not natural earthquakes. The answer sentence is thus incorrect. Most of it is copied from the documents. | location and their supporting sentences in the evidence document set in Fig. 3(a)(b), by aggregating the supporting sentences sets annotated for each answer sentence. We report supporting sentences locations on both answers generated with documents (Fig. 3(a)) and without documents (Fig. 3(b)), with the focus on the former and the latter being a reference. We identify linear correspondence on answers generated with documents, with information mentioned earlier in the evidence document appears earlier in the generated answer. This suggests the order of evidence documents will be reflected in the order of generated contents. Recent study [Liu et al., 2023a] also showed order sensitivity of in-context augmentation for factoid QA, showing that models ignore information in the middle. We also find that later half of the evidence documents, except for the last portion, are less cited by the generated answer (see Avg. column in Fig. 3). Which parts of the answer are less supported by the evidence documents? 
Generated long-form answers consist of 5-10 sentences. Would sentences generated earlier more likely to be supported by evidence documents? Fig. 3(c)(d) report the percentage of unsupported sentences by the relative position of the answer sentence on our data and attribution annotation on long-form answers from commercial generative search engines from [Liu et al., 2023b], respectively. We find that the last sentence is almost twice as likely to be unsupported compared to other sentence in the answer. This phenomenon is even more pronounced on [Liu et al., 2023b]. What causes the model to produce unsupported sentences? We manually examine 30 answer sentences labeled as Not Supported for each setting that has access to evidence documents. We identify three categories of unsupported sentences: retrieval failure, hallucinated facts, and incorrect synthesis. Table 3 provides description for each category along with an example. In Table 6 in the appendix, we further provide breakdown of error types for each generation setting. During our analysis, we found that about 14% of errors corresponds to annotation error. We found that attribution error happens more frequently when the retrieved documents do not provide sufficient evidences for answering the question. Generating ungrounded concepts is a more common cause of unsupported sentences than incorrectly synthesizing information from incompatible documents. However, incorrect synthesis happens relatively more frequently in WebGPT model, --- 3We analyze all unsupported answer sentences generated by WebGPT, as there are only 17 of them in total. 4Categories are not mutually exclusive (one can contain irrelevant documents and combine facets from each). potentially as it grounds its information more heavily from the documents. This suggests multi-document summarization and synthesis is an important direction for future work, especially for more faithful retrieval-augmented LMs. 7 AUTOMATICALLY IDENTIFYING UNSUPPORTED SENTENCES Annotating attribution requires careful reading over multiple documents and comparison between two texts. Recent prior work (Bohnet et al., 2022; Gao et al., 2023a; 2022) showed that fine-tuned models from NLI datasets can successfully automate this process. We investigate automatic identification of unsupported answer sentences in LFQA domain with our collected dataset. 7.1 EVALUATING AUTOMATIC ATTRIBUTION METHODS Setting Given a question \( q \), reference documents \( D \) and answer sentence \( y_i \), the evaluated system should predict if each answer sentence \( y_i \) is supported by \( D \). We merge Partially Supported and Not Supported into a single class and consider it as a target label. We report micro average F1 score, which is computed over the set of predictions and labels of all the answer sentences for each generation setting in Section 5 separately, as model performances vary greatly per dataset. We report accuracy in Appendix B.3, which shows similar trends. Comparison Systems We evaluate methods for automatically evaluating attribution. First we establish lower and upper bounds, and introduce existing methods. No model is fine-tuned for our task, but we chose one hyperparameter, threshold for deciding supportedness or not. - Baselines As a lower bound, we provide random (which randomly assigns labels for each answer sentence according to the label distribution in each dataset) and majority baseline (which assigns the majority label for all instances). 
- Human Performance We report the human performance by taking one set of the annotations as the prediction set and another set of annotations as the label set. We compute the F1 score, and take an average across three possible pairs. - NLI models Prior works (Schuster et al., 2022; Laban et al., 2022; Gao et al., 2023b) showed off-the-shelf NLI models can be used to identify generated sentences that are not supported by the evidence document set. We evaluate four NLI model variants: two RoBERTa-large (from Nie et al., 2020) and (Yin et al., 2021), ALBERT-xlarge (from Schuster et al., 2021), and T5-11B (from Honovich et al., 2022) trained on a combination of NLI datasets. While most NLI models compare a pair of sentences, our setting compares a set of documents (as hypothesis) and a sentence (as premise). For the models except the RoBERTa-large trained on DocNLI (Yin et al., 2021), we follow Schuster et al. (2022), which makes entailment decisions for each document sentence and answer sentence pair and aggregates the results by taking the maximum value over all the pairs. The details on training the NLI models can be found in Appendix A.4. - QAFactEval (Fabbrì et al., 2022): is a QA-based factual consistency metric for summarization. It indicates how consistent the generations are with respect to the given documents. We use long-form answer in place of the summary, measuring whether the questions generated from long-form answer are answerable by the given documents. Results We report model performances in Figure 4a, with each box representing the performance of an approach. Each dot in the box is the score on each answer generation setting. We report the exact scores in Table 7 in the appendix. We find all methods outperform simple baselines (majority, random) by a large margin, but none comes close to human agreements. As in factoid QA setting (Bohnet et al., 2022), we find that T5 model achieves very competitive performances across datasets, achieving an average F1 over 60 and accuracy over 80%. While developed for a different domain (summarization), QAFactEval performs relatively well. 7.2 APPLYING AUTOMATIC ATTRIBUTION METHODS Having discovered that trained T5 model achieves competitive performances in predicting attribution, we use the T5 model as approximation for human judgement on attribution in generation settings evaluated in Table 1 following (Gao et al., 2023b; Bohnet et al., 2022), as human annotations are costly. This complement human annotation results in Section 6. We quantify how frequently the answer sentences are supported by different types of documents using the T5 model. In Figure 4b, we present the results of attribution predicted by the T5 model (along with gold human attribution score if exists). We find answers generated with random documents as evidence (last row in each block) exhibit similar attribution pattern with answers generated without documents (first row in each block). This suggests that models successfully ignore irrelevant documents, and retain similar level of attribution to relevant document, especially for GPT-3 base model. Providing noisy, yet relevant document set (+Bing docs) still does not meaningfully change attribution pattern with respect to the other documents (Human docs, WebGPT docs, Random docs), yet increases supportedness towards provided evidence document set (Bing). Adding WebGPT docs brought the highest change in both models, both in terms of attribution towards other relevant documents (Human) and towards the provided document set. 
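The automatic judgements used for Figure 4b follow the sentence-pair aggregation described in Section 7.1: every (document sentence, answer sentence) pair is scored by an NLI model and the maximum entailment probability is thresholded. A minimal sketch with a public MNLI checkpoint is given below; the checkpoints actually evaluated (including the T5-11B model) and the tuned threshold are described in Appendix A.4, so both the model name and the 0.5 cut-off here are placeholders.

```python
import torch
from nltk.tokenize import sent_tokenize            # requires NLTK's punkt data
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"                  # illustrative public checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()
ENTAILMENT_ID = nli_model.config.label2id.get("ENTAILMENT", 2)

@torch.no_grad()
def is_supported(answer_sentence: str, documents: list, threshold: float = 0.5) -> bool:
    """Max entailment probability over all (document sentence, answer sentence) pairs."""
    best = 0.0
    for doc in documents:
        for premise in sent_tokenize(doc):
            enc = tokenizer(premise, answer_sentence, return_tensors="pt", truncation=True)
            probs = nli_model(**enc).logits.softmax(dim=-1)[0]
            best = max(best, probs[ENTAILMENT_ID].item())
    return best >= threshold
```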
Adding human document also shows similar trends but less impact, potentially as it contains less information than WebGPT docs. 8 Conclusion / Suggestions for Future Work We present an extensive study on retrieval-augmented LM generation in the context of LFQA. Our analysis suggests concrete directions for future work. First, LMs trained without retrieval and attribution in mind does not always generate sentences that can be attributed to in-context evidence documents, even when provided relevant documents only. This motivates designing and training LMs after introducing in-context evidence documents. Analyzing patterns of unsupported sentences, we find that injecting faithful multi-document synthesis ability to LLM can be an important direction for future work. Second, we find evidence document should be carefully added to LMs. For example, the order of information presented in evidence documents will impact the order of information presented in the generated answer. And even prepending irrelevant documents meaningfully change the surface statistics of generated answers, though attribution percentage to relevant documents remains somewhat stable. We find attribution error is more common when prepended documents without sufficient information, motivating improving retriever. Also, retrieval failure being one of the main cause of attribution error suggests that the retriever component could be further improved. Third, off-the-shelf NLI models show promising performance at identifying generated sentences unsupported by evidence document, but fall behind human agreements. With our annotated new dataset as well as other related datasets [Liu et al., 2023b], one can investigate improving automatic attribution methods. Together, we present a comprehensive study of retrieval augmented models for the long-form question answering task, a challenging yet important problem in NLP. REFERENCES Bernd Bohnet, Vinh Q Tran, Pat Verga, Roei Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. Attributed question answering: Evaluation and modeling for attributed large language models. *arXiv preprint arXiv:2212.08037*, 2022. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pp. 632–642, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1075. URL https://aclanthology.org/D15-1075. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Hung-Ting Chen, Michael J.Q. Zhang, and Eunsol Choi. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In *Conference on Empirical Methods in Natural Language Processing*, 2022. URL https://api.semanticscholar.org/CorpusID:253107178. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. *arXiv preprint arXiv:2210.11416*, 2022. Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. QAFactEval: Improved QA-based factual consistency evaluation for summarization. 
In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2587–2601, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.187. URL https://aclanthology.org/2022.naacl-main.187. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. *arXiv preprint arXiv:1907.09190*, 2019. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In *Proceedings of the 2nd Workshop on Machine Reading for Question Answering*, pp. 1–13, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5801. URL https://aclanthology.org/D19-5801. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. Attributed text generation via post-hoc research and revision. *arXiv preprint arXiv:2210.08726*, 2022. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. RARR: Researching and revising what language models say, using language models. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 16477–16508, Toronto, Canada, July 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.910. Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 6894–6910, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.552. URL https://aclanthology.org/2021.emnlp-main.552. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. Enabling large language models to generate text with citations. *arXiv preprint arXiv:2305.14627*, 2023b.
HoY24hOeVP
Firstly, one important motivation is the combination with different textual prompts. I guess that this combination is implemented in a single subspace (and if my understanding is incorrect, kindly correct me).
Efficient Personalized Text-to-Image Generation by Leveraging Textual Subspace Anonymous authors Paper under double-blind review Abstract Personalized text-to-image generation has attracted unprecedented attention in the recent few years due to its unique capability of generating highly-personalized images via using the input concept dataset and novel textual prompt. However, previous methods solely focus on the performance of the reconstruction task, degrading its ability to combine with different textual prompt. Besides, optimizing in the high-dimensional embedding space usually leads to unnecessary time-consuming training process and slow convergence. To address these issues, we propose an efficient method to explore the target embedding in a textual subspace, drawing inspiration from the self-expressiveness property. Additionally, we propose an efficient selection strategy for determining the basis vectors of the textual subspace. The experimental evaluations demonstrate that the learned embedding can not only faithfully reconstruct input image, but also significantly improves its alignment with novel input textual prompt. Furthermore, we observe that optimizing in the textual subspace leads to an significant improvement of the robustness to the initial word, relaxing the constraint that requires users to input the most relevant initial word. Our method opens the door to more efficient representation learning for personalized text-to-image generation. 1 Introduction An important human ability is to abstract multiple visible concepts and naturally integrate them with known visual content using a powerful imagination (Cohen et al., 2022; Ding et al., 2022; Gao et al., 2021; Kumar et al., 2022; Li et al., 2022; Skantze & Willemsen, 2022; Zhou et al., 2022). Recently, a method for rapid personalized generation using pre-trained text-to-image model has been attracting public attention (Gal et al., 2022; Kumari et al., 2022; Ruiz et al., 2022). It allows users to represent the input image as a “concept” by parameterizing a word embedding or fine-tuning the parameters of the pre-trained model and combining it with other texts. The idea of parameterizing a “concept” not only allows the model to reconstruct the training data faithfully, but also facilitates a large number of applications of personalized generation, such as text-guided synthesis (Rombach et al., 2022b), style transfer (Zhang et al., 2022), object composition (Liu et al., 2022), etc. As the use of personalized generation becomes more widespread, a number of issues have arisen that need to be addressed. The problems are two-fold: first, previous methods such as Textual Inversion (Gal et al., 2022) choose to optimize directly in high-dimensional embedding space, which leads to inefficient and time-consuming optimization process. Second, previous methods only target the reconstruction of input images, degrading the ability to combine the learned embedding with different textual prompt, which makes it difficult for users to use the input prompt to guide the pre-trained model for controllable generation. A natural idea is to optimize in a subspace with high text similarity\(^1\), so as to ensure that the learned embedding can be flexibly combined with textual prompt. Meanwhile, optimizing in the low-dimensional space can also improve training efficiency and speed up convergence. 
However, existing methods directly used gradient backpropagation to \(^1\) As stated in (Gal et al., 2022), high text similarity indicates that encoding the embedding into the pre-trained model is more likely to generate the corresponding images (e.g., input the embedding of the word “cat” will output a photo of cat), which is usually measured by the cosine similarity between the image and text features transformed by the CLIP model (Hessel et al., 2021). optimize the embedding (Gal et al., 2022), making it difficult to explicitly constrain the embedding to a specific subspace for each dataset. In order to bypass this difficulty, we drew inspiration from the self-expressiveness property\(^2\), which led us to the realization that any target embedding \(v\) can be reconstructed by the combination of other pre-trained embeddings \(v_1, v_2, \ldots, v_M\) in the vocabulary \(V\) provided by text-to-image models, where \(M < |V|\) and \(|V|\) is the number of elements in \(V\). Once such embeddings \(v_1, v_2, \ldots, v_M\) are obtained, we can construct a subspace \(S\) by spanning them, i.e., \(S = \text{span}(v_1, v_2, \ldots, v_M)\). Subsequently, we can conduct an efficient optimization in a low-dimensional space \(S\). Specifically, we explicitly select a number of semantically relevant embeddings from the vocabulary as \(v_1, v_2, \ldots, v_M\), and then we form a semantically meaningful subspace \(S\). To achieve semantically relevant embeddings, we introduce a rank-based selection strategy that uses the nearest neighbour algorithm. This strategy efficiently selects \(v_1, v_2, \ldots, v_M\) that are semantically close to the input concept, which allows the embeddings learned by our method to be naturally combined with other texts. To this end, we proposed a method BaTex, which can efficiently learn arbitrary embedding in a textual subspace\(^3\). Comparing with existing methods like such as Textual Inversion (Gal et al., 2022), the proposed BaTex do not require to search solutions in the entire high-dimensional embedding space, and thus it can improve training efficiency and speed up convergence. The schematic diagram is shown in Figure 1, where BaTex optimizes in a low-dimensional space (shown in red dashed area), which requires fewer training steps and achieves higher text similarity. We have experimentally demonstrated the expressiveness and efficiency of the proposed BaTex across multiple object and style concepts. BaTex completely avoids the overfitting issues observed in some previous methods (Kumari et al., 2022; Ruiz et al., 2022). In addition, it has the potential to integrate with more advanced and expressive large-scale text-to-image models such as Stable-Diffusion-2\(^4\). By efficiently exploring the embedding in a textual subspace, our method opens the door to further advancements in efficient personalized text-to-image generation. We summarize the main contributions as follows: - We propose a novel method BaTex for learning arbitrary embedding in a low-dimensional textual subspace, which is time-efficient and better preserves the text similarity of the learned embedding. --- \(^2\)Each data point in a union of subspaces can be efficiently reconstructed by a combination of other points in the dataset. (Elhamifar & Vidal, 2013) \(^3\)Subspace with high text similarity. \(^4\)https://huggingface.co/stabilityai/stable-diffusion-2 • On the theoretical side, we demonstrate that the selected basis embedding can be combined to produce arbitrary embedding. 
Also, we derive that the proposed method is equivalent to applying a projection matrix to the embedding update of Textual Inversion.
• We experimentally demonstrate that the learned embeddings can not only faithfully reconstruct the input image, but also significantly improve its alignment with different textual prompts. In addition, the robustness to the initial word is improved, relaxing the constraint that requires users to input the most relevant word as initialization.

2 BACKGROUND AND RELATED WORK

In this section, we give the background and related work on deep generative models and their extensions. We first introduce diffusion models, a class of deep generative models, and then briefly discuss text-to-image synthesis and personalized generation.

2.1 DIFFUSION MODELS

The goal of deep generative models, such as Flow-based Models (Dinh et al., 2016; Du et al., 2022), VAEs (Burgess et al., 2018), GANs (Goodfellow et al., 2020) and Diffusion Models (Dhariwal & Nichol, 2021), is to approximate an unknown data distribution by explicitly or implicitly parameterizing a model distribution using the training data. As a class of deep generative models that has been shown to produce high-quality images, Diffusion Models (Dhariwal & Nichol, 2021; Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2020) synthesize data via an iterative denoising process. As suggested in Ho et al. (2020), a reweighted variational bound can be used as a simplified training objective, and the sampling process aims to predict the noise added in the forward process. The denoising objective is realized as a mean squared error:
$$L_{\text{original}} = \mathbb{E}_{x_0,c,\epsilon,t}[\|\epsilon - \epsilon_\theta(\alpha_t x_0 + \sigma_t \epsilon, c)\|_2^2],$$
where $x_0$ are training data with conditions $c$, $t \sim \mathcal{U}(0, 1)$ is the sampled time step, $\epsilon \sim \mathcal{N}(0, I)$ is the Gaussian noise sampled in the forward process, $\alpha_t, \sigma_t$ are pre-defined scalar functions of the time step $t$, and $\epsilon_\theta$ is the parameterized reverse process with trainable parameters $\theta$.

Recently, a class of Diffusion Models named Latent Diffusion Models (Rombach et al., 2022a) has attracted the interest of the community; it leverages a pre-trained autoencoder to map images from pixel space to a more efficient latent space, which significantly accelerates training and reduces memory. Latent Diffusion Models consist of two core components. First, a pre-trained autoencoder is used, consisting of an encoder $E$ and a decoder $D$. The encoder $E$ maps images $x_0 \sim p(x)$ to a latent code $z_0$ in a low-dimensional latent space, and the decoder $D$ learns to map it back to pixel space, such that $D(E(x)) \approx x$. Second, a diffusion model is trained on this latent space. The denoising objective now becomes
$$L_{\text{latent}} = \mathbb{E}_{z_0,c,\epsilon,t}[\|\epsilon - \epsilon_\theta(\alpha_t z_0 + \sigma_t \epsilon, c)\|_2^2],$$
where $z_0 = E(x_0)$ is the latent code produced by the pre-trained encoder $E$. For conditional synthesis, in order to improve sample quality while reducing diversity, classifier guidance (Dhariwal & Nichol, 2021) uses gradients from a pre-trained model $p(c|z_t)$, where $z_t := \alpha_t z_0 + \sigma_t \epsilon$.
Classifier-free guidance (Ho & Salimans, 2022) is an alternative approach that avoids this pre-trained model by instead jointly training a single diffusion model on conditional and unconditional objectives via randomly dropping $c$ during training; at sampling time, a guidance scale $w$ offers a tradeoff between sample quality and diversity. The modified predictive model is shown as follows: $\hat{\epsilon}_\theta(z_t, c) = w \epsilon_\theta(z_t, c) + (1 - w) \epsilon_\theta(z_t, \phi)$, where $\phi = c_\theta(" ")$ is the embedding of a null text and $c_\theta$ is the pre-trained text encoder such as BERT (Devlin et al., 2018) or CLIP (Radford et al., 2021). 2.2 TEXT-TO-IMAGE SYNTHESIS Recent large-scale text-to-image models such as Stable-Diffusion (Rombach et al., 2022a), GLIDE (Nichol et al., 2021) and Imagen (Saharia et al., 2022) have demonstrated unprecedented semantic generation capability. We implement our method based on Stable-Diffusion, which is a publicly available 1.4-billion-parameter text-to-image diffusion model pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). Here, the condition \( c \) in the objectives above is the processed text condition, and typical text encoders process the input text in three steps. Firstly, a textual prompt input by the user is split by a tokenizer to transform each word or sub-word into a token, which is an index in a pre-defined language vocabulary. Secondly, each token is mapped to a unique text embedding vector, which can be retrieved through an index-based lookup (Gal et al., 2022). Finally, the embedding vectors are concatenated and transformed by the CLIP text encoder to obtain the text condition \( c \). ### 2.3 Personalized Generation As the demand continues to grow, personalized generation has become a prominent topic in machine learning, with applications such as recommendation systems (Amat et al., 2018) and language models (Cattiau, 2022). Within the vision community, adapting models to a specific object or style is gradually becoming a target of interest. Users often wish to input their own real images to parameterize a “concept” from them and combine it with a wide range of textual prompts to create new images. A recently proposed method, Textual Inversion (Gal et al., 2022), chooses the text embedding space as the location of the “concept”. It intervenes in the embedding process and uses a learned embedding \( v \) to represent the concept, in essence “injecting” the concept into the language vocabulary. Specifically, it defines a placeholder string \( S \) (such as “A photo of *”) as the textual prompt, where “*” is the pseudo-word corresponding to the target embedding \( v \) it wishes to learn. The embedding matrix \( y \in \mathbb{R}^{N \times d} \) can be obtained by concatenating \( v \) with other frozen embeddings (such as the corresponding embeddings of “a”, “photo” and “of” in the example), where \( N \) is the number of words in the placeholder string\(^5\) and \( d \) is the dimension of the embedding space. The above process is defined as (Gal et al., 2022): \[ y \leftarrow \text{Combine}(S, v). \] The optimization goal is defined as: \[ \arg \min_{v} \mathbb{E}_{z_0, \epsilon, t}[\|\epsilon - \epsilon_\theta(\alpha_t z_0 + \sigma_t \epsilon, c_\theta(y))\|_2^2], \] where \( z_0, \epsilon \) and \( t \) are defined in Equations (1) and (2). Please note that \( y \) is a function of \( v \).
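To make the optimization goal above concrete, the following is a minimal PyTorch-style sketch rather than the actual implementation: the denoiser $\epsilon_\theta$ and the text encoder $c_\theta$ are replaced by frozen toy linear modules, `combine` is an illustrative stand-in for the Combine operator, and only the placeholder embedding $v$ receives gradients.

```python
# A toy, self-contained sketch of the Textual-Inversion-style objective above:
# only the placeholder embedding v is trainable; the stand-ins for the text
# encoder c_theta and the denoiser eps_theta stay frozen.
import torch
import torch.nn as nn

d, latent_dim, n_words = 16, 8, 4                    # toy sizes (the real d is 768)
frozen_embeds = torch.randn(n_words - 1, d)          # embeddings of "a", "photo", "of"
v = torch.randn(d, requires_grad=True)               # target embedding for the pseudo-word "*"

c_theta = nn.Linear(n_words * d, latent_dim)         # stand-in text encoder
eps_theta = nn.Linear(2 * latent_dim, latent_dim)    # stand-in denoiser eps_theta(z_t, c)
for p in list(c_theta.parameters()) + list(eps_theta.parameters()):
    p.requires_grad_(False)                          # pre-trained parts are frozen

def combine(frozen, v):
    # Stand-in for Combine(S, v): concatenate the frozen prompt embeddings with v.
    return torch.cat([frozen, v.unsqueeze(0)], dim=0).reshape(-1)

z0 = torch.randn(latent_dim)                         # latent code E(x_0) of a training image
opt = torch.optim.Adam([v], lr=5e-3)
for step in range(100):
    t = torch.rand(())                               # t ~ U(0, 1)
    alpha_t, sigma_t = 1.0 - t, t                    # toy noise schedule
    eps = torch.randn(latent_dim)
    z_t = alpha_t * z0 + sigma_t * eps               # forward (noising) process
    c = c_theta(combine(frozen_embeds, v))           # the condition depends on v through y
    loss = ((eps - eps_theta(torch.cat([z_t, c]))) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the frozen networks are only queried in the forward pass, the gradient of the reconstruction loss flows back solely into $v$, which is what makes the search space of Textual Inversion the full $d$-dimensional embedding space.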
Although Textual Inversion can extract a single concept formed by 3-5 images and reconstruct it faithfully, it cannot be flexibly combined with textual prompts since it solely considers the performance of the reconstruction task during optimization. Also, it searches for the target embedding in the high-dimensional embedding space, which is time-consuming and slow to converge. To address the issues above, we observe that the pre-trained embeddings in the vocabulary are expressive enough to represent any introduced concept. Therefore, we explicitly define a projection matrix to efficiently optimize the target embedding in a low-dimensional textual subspace, which speeds up convergence and better preserves the text similarity of the learned embedding. ### 3 The Proposed BaTex Method In this section, we introduce the proposed BaTex method. Following the definition of Stable Diffusion (Rombach et al., 2022a), \( \mathbb{E} = \mathbb{R}^d \) is the word embedding space with dimension \( d \) and \( V \) is the word vocabulary defined by the CLIP text model (Radford et al., 2021). The words in vocabulary \( V \) correspond to a set of pre-trained vectors \( \{v_i\}_{i=1}^{|V|} \), where \( |V| \) is the cardinality of set \( V \). #### 3.1 Optimization Problem We first state that any vector in the embedding space \( \mathbb{E} \) can be represented by the embeddings in the vocabulary \( V \), as shown in the following theorem, whose proof can be found in Appendix B. **Theorem 1** Any vector \( v \) in the word embedding space \( \mathbb{E} \) can be represented by a linear combination of the embeddings in vocabulary \( V \). \(^5\)In practice, the embedding matrix \( y \in \mathbb{R}^{N_{max} \times d} \), where \( N_{max} \) is the pre-defined maximum number of words in a sentence, and the remaining \( N_{max} - N \) positions are filled with the terminator token. For clarity of expression, we only consider the words with practical meaning in the main text.
Algorithm 1 Selection Strategy of Textual Subspace.
**Input:** vocabulary $V$, initialization embedding $u$, dimension of the embedding space $d$, dimension of textual subspace $d_1$, vector distance $\mathcal{F}$
**Phase 1: reordering the embeddings using the vector distance**
Calculate the distance vector $dist_V = (\mathcal{F}(u, v_1), \ldots, \mathcal{F}(u, v_{|V|}))^T$
Order the embeddings $(v_1, \ldots, v_{|V|}) \leftarrow \text{Order}((v_1, \ldots, v_{|V|}), dist_V)$
**Phase 2: rank-based selection strategy**
repeat
Select a proper number $M$ and get the top $M$ embeddings: $v_{i_1}, \ldots, v_{i_M}$
Form the embedding matrix: $V_M \leftarrow [v_{i_1}, \ldots, v_{i_M}]$
Compute the rank of the embedding matrix: $r(V_M)$
until $r(V_M) \geq d_1$
**Output:** embeddings $v_{i_1}, \ldots, v_{i_M}$
As stated in Theorem 1, any vector in $\mathbb{E}$ can be represented by a linear combination of the embeddings in the vocabulary $V$. Now we define the weight vector $w = (w_1, \ldots, w_{|V|})^T$ with each component corresponding to an embedding in $V$, and the embedding $v = \sum_{i=1}^{|V|} w_i v_i$. Since the user inputs an initial embedding $u \in V$, we wish the starting point in $\mathbb{E}$ to be the same as $u$ in our algorithm. Thus, we initialize the weights as: $$w^0 := (w_1^0, \ldots, w_{|V|}^0)^T = (0, \ldots, 0, \underbrace{1}_{i = i_u}, 0, \ldots, 0)^T,$$ where $i_u$ denotes the index corresponding to $u$.
Then the embedding $v$ to be learned can now be initialized as: $$v^0 = w_1^0 v_1 + \cdots + w_{i_u}^0 u + \cdots + w_{|V|}^0 v_{|V|}.$$ Subsequently, it can be combined with the placeholder string to form the embedding matrix $y$ as stated in Section 2.3. The reconstruction task can be formulated as the following optimization problem: $$\arg\min_{v \in \mathbb{E}} L_{\text{rec}} := \mathbb{E}_{z_0, \epsilon, t} [\|\epsilon - \epsilon_\theta(\alpha_t z_0 + \sigma_t \epsilon, c_\theta(y))\|^2_2].$$ where $z_0$, $\epsilon$, $t$, $\alpha_t$ and $\sigma_t$ are detailed in Equation (1) and (2), $c_\theta$ is the text encoder defined in Section 2.1, $y$ is a function of $v$. To solve problem (5), we iteratively update the weight vector $w$ by using gradient descent with initial point $w^0$, so that the embedding $v$ is also updated. While it is expressive enough to represent any concept in the embedding space $\mathbb{E}$, most weights and vectors are unnecessary since the rank $r(A_V) = d$ as stated in Theorem 1, where $A_V = [v_1, \ldots, v_{|V|}] \in \mathbb{R}^{d \times |V|}$. Thus, we only need at most $d$ vectors to optimize with $u$ included, which corresponds to selecting $d$ linearly-independent vectors $v_{i_1}, v_{i_2}, \ldots, u, \ldots, v_{i_d}$ from vocabulary $V$. ### 3.2 Textual Subspace As detailed in Section 3.1, any vector in the embedding space $\mathbb{E}$ can be obtained using $d$ vectors $v_{i_1}, v_{i_2}, \ldots, u, \ldots, v_{i_d}$. However, optimizing the embedding $v$ by solving problem (5) solely targets the reconstruction of input images, leading to low text similarity (Gal et al., 2022). Besides, solving problem (5) requires searching in the whole high-dimensional embedding space $\mathbb{E}$, which results in time-consuming training process and slow convergence. It is natural to construct a textual subspace with high text similarity, in which the searched embedding is able to capture the details of the input image. To this end, vectors with high semantic relevance to $u$ should be included. As suggested in (Goldberg & Levy, 2014; Le & Mikolov, 2014; Mikolov et al., 2013a;b; Rong, 2014), the vector distance (denoted by $\mathcal{F}$) between the word embeddings in $V$, such as dot product, cosine similarity and $L_2$ norm, can be employed as the semantic similarity of the corresponding words. Now, we are ready to give the distance vector $dist_V$ to calculate the distance between $u$ and any embedding in $V$, which is given by $$dist_V = (\mathcal{F}(u, v_1), \ldots, \mathcal{F}(u, v_{|V|}))^T.$$ Next, we re-order $dist_V$ and the top $M$ vectors are selected as the basis vectors, where at most $d_1 \leq M$ vectors are linearly-independent among them and $d_1$ is the dimension of the textual subspace. The choice of $d_1$ and $\mathcal{F}$ are further discussed in Section 5 and Appendix F. The details of selection strategy are presented in Algorithm 1. 
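As an illustration of Algorithm 1, the following NumPy sketch (with hypothetical names such as `select_textual_subspace`, and cosine similarity chosen as the vector distance $\mathcal{F}$) orders the vocabulary by similarity to the initial embedding $u$ and enlarges the candidate set until its rank reaches $d_1$.

```python
# A NumPy sketch of the rank-based selection strategy: Phase 1 orders the
# vocabulary by similarity to u; Phase 2 grows the top-M set until rank >= d1.
import numpy as np

def cosine_sim(u, V):
    # Vector distance F: cosine similarity between u and every row of V.
    return (V @ u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(u) + 1e-12)

def select_textual_subspace(V, u, d1, step=64):
    # V: (|V|, d) pre-trained embeddings, u: (d,) initial embedding, d1: target rank.
    order = np.argsort(-cosine_sim(u, V))            # Phase 1: most similar first
    M = step
    while True:                                       # Phase 2: rank-based selection
        V_M = V[order[:M]]                            # top-M candidate basis embeddings
        if np.linalg.matrix_rank(V_M) >= d1 or M >= len(V):
            return V_M                                # spans the textual subspace S
        M += step

rng = np.random.default_rng(0)
V = rng.standard_normal((1000, 32))                   # toy vocabulary (the real d is 768)
u = V[7]                                              # embedding of the user's initial word
basis = select_textual_subspace(V, u, d1=16)
print(basis.shape, np.linalg.matrix_rank(basis))
```

The selected rows then span the subspace $S$ in which the weights $w$ are optimized, so the effective search space has dimension about $d_1$ instead of $d$.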
Algorithm 2 BaTex Input: image dataset \( D \), pre-trained diffusion network \( u_\theta \), reconstruction objective \( L_{rec} \), vector distance \( F \), dimension of textual subspace \( d_1 \), training iteration \( n \), weight decay \( \gamma \), vocabulary \( V \), initial weights \( \{w_i^0\}_{i=1}^{M} \), initial embedding \( u \), input prompt \( S \) Select \( M \) embeddings using Algorithm 1: \( v_{i_1}, \ldots, v_{i_M} \) Initialize weights of selected embeddings \( \{w_i\}_{i=1}^{M} \leftarrow \{w_i^0\}_{i=1}^{M} \) for \( i = 1 \) to \( n \) do Sample a mini-batch \( \{x\} \sim D \) Update embeddings \( \{w_i\}_{i=1}^{M} \leftarrow \nabla_{(w_1,\ldots,w_M)}L_{rec}(\{x\}, \{w_i\}_{i=1}^{M}, \gamma) \) end for Get trained weights of the candidate embeddings \( \{w_i^*\}_{i=1}^{M} \) Obtain learned embedding using linear combination \( v^* \leftarrow \sum_{i=1}^{M} w_i^* v_{i_M} \) Combine learned embeddings with input prompt \( y \leftarrow \text{Combine}(S, v^*) \) Get target images through pretrained diffusion model \( \hat{x} \leftarrow u_\theta(y) \) Output: target images: \( \hat{x} \) 3.3 Concept Generation Given the chosen embeddings \( v_{i_1}, \ldots, v_{i_M} \), once the corresponding learned weights \( \{w_i^*\}_{i=1}^{M} \) are obtained by \( \arg\min_{v \in S} L_{rec} \) with \( S = \text{span}(v_{i_1}, \ldots, v_{i_M}) \), the learned embedding \( v^* \) can be formed as: \[ v^* = w_1^* v_{i_1} + w_2^* v_{i_2} + \cdots + w_M^* v_{i_M}. \] Then it can be combined with any input textual prompt \( S \) as: \[ y \leftarrow \text{Combine}(S, v^*), \] where the Combine operator is defined in Section 2.3. Subsequently, the target images \( \hat{x} \) are generated using the pre-trained diffusion network \( u_\theta \): \[ \hat{x} \leftarrow u_\theta(y). \] Details of BaTex are shown in Algorithm 2. Finally, we derive that for single-step optimization scenario, the difference of the embedding update between Textual Inversion and BaTex corresponds to a matrix transformation with rank \( d_1 \), which is the dimension of the textual subspace. The formal theorem is presented as follows, and its proof is included in Appendix C. Theorem 2 For single-step optimization, let \( v_1^* = u + \Delta v_1 \) and \( v_2^* = u + \Delta v_2 \) be the updated embedding of Textual Inversion and BaTex respectively, where \( u \) is the initial embedding. Then there exists a matrix \( B_V \in \mathbb{R}^{d \times d_1} \) with rank \( d_1 \), such that \[ \Delta v_2 = B_V \Delta v_1, \] where \( d_1 \) is the dimension of the textual subspace (\( d_1 < d \)). It can be seen from Theorem 2 that BaTex actually defines a transformation from \( \mathbb{R}^d \) to \( \mathbb{R}^{d_1} \) using the selection strategy stated in Algorithm 1, which intuitively benefits for optimization process since \( B_V \) is formed by the pre-trained embeddings, showing that BaTex explicitly extracts more information from the vocabulary \( V \). 4 Experiments 4.1 Experimental Settings In this subsection, we present the experimental settings, and more details can be found in Appendix E. We compare the proposed BaTex with Textual Inversion (TI) (Gal et al., 2022), the original method for personalized generation which lies in the category of “Embedding Optimization”. 
To analyze the | Category | Method | Param Size | Training Steps | Training Time | High Image-Alignment | High Text-Alignment | Avoid Overfitting | |------------------|-------------------|------------|----------------|---------------|----------------------|---------------------|-------------------| | Model Optimization | Dreambooth | 860M | 800 | 12min | ✓ | | ✓ | | | Custom-Diffusion | 57.1M | 250 | 10min | ✓ | ✓ | ✓ | | Embedding Optimization | Textual Inversion | 768 | 3000 | 1h | ✓ | | ✓ | | | BaTex (Ours) | < 768 | 500 | 10min | ✓ | ✓ | ✓ | Table 1: A comparison of the abilities of different methods. We also compare with two “Model Optimization” methods, DreamBooth (DB) (Ruiz et al., 2022) and Custom Diffusion (CD) (Kumari et al., 2022). DB finetunes all the parameters in Diffusion Models, resulting in the ability of mimicking the appearance of subjects in a given reference set and synthesize novel renditions of them in different contexts. However, it finetunes a large amount of model parameters, which leads to overfitting (Ramasesh et al., 2022). CD compares the effect of model parameters and chooses to optimize the parameters in the cross-attention layers. While it provides an efficient method to finetune the model parameters, it requires to prepare a regularized dataset (extracted from LAION-400M dataset (Schuhmann et al., 2021)) to mitigate overfitting, which is time-consuming and hinders its scalability to on-site application. A detailed comparison of method ability can be seen in Table 1. 4.2 Qualitative Comparison We first show that learning in a textual subspace significantly improves the text similarity of learned embedding while retaining the ability to reconstruct the input image. The results of text-guided synthesis are shown in Figure 2. As can be seen, for complex input textual prompt with additional text conditions, our method completely captures the input concept and naturally combines it with known concepts. Additionally, We show the effectiveness of our method by composing two concepts together and introducing additional text conditions (shown in bolded text). The results are shown in Figure 3. It can be seen that BaTex not only allows for the lossless combination of two distinct concepts, but also faithfully generates the additional text conditions. | Category | Method | Metric | Cat [5] | Wooden-pot [4] | Gta5-artwork [14] | Anders-zorn [12] | Cute-game [8] | Mean | |-------------------|--------|--------|---------|----------------|------------------|-----------------|--------------|------| | Model Optimization| DB | Text | 0.74 (0.00) | 0.63 (0.01) | 0.72 (0.01) | 0.72 (0.01) | 0.62 (0.00) | 0.69 | | | | Image | 0.91 (0.01) | 0.88 (0.01) | 0.61 (0.01) | 0.74 (0.00) | 0.61 (0.01) | 0.75 | | CD | Text | 0.79 (0.00) | 0.71 (0.00) | 0.74 (0.01) | 0.74 (0.01) | 0.74 (0.00) | 0.74 | | | Image | 0.87 (0.00) | 0.75 (0.00) | 0.59 (0.01) | 0.60 (0.02) | 0.56 (0.00) | 0.68 | | Embedding Optimization| TI | Text | 0.62 (0.00) | 0.66 (0.01) | 0.78 (0.01) | 0.72 (0.01) | 0.72 (0.00) | 0.70 | | | Image | 0.89 (0.01) | 0.81 (0.00) | 0.67 (0.01) | 0.67 (0.01) | 0.68 (0.01) | 0.74 | | BaTex | Text | 0.76 (0.00) | 0.72 (0.01) | 0.80 (0.00) | 0.77 (0.00) | 0.77 (0.01) | 0.76 | | | Image | 0.88 (0.01) | 0.81 (0.01) | 0.66 (0.01) | 0.72 (0.00) | 0.66 (0.01) | 0.74 | Table 2: Quantitative comparison between BaTex and previous works. The numbers in square bracket and parenthesis are the number of image in the dataset and the standard deviation. 
“Text” and “Image” refer to text-image and image-image alignment scores respectively. Red and blue numbers indicate the best and second best results respectively. While our method achieves similar results to TI in terms of image reconstruction, we significantly outperform them in terms of text similarity, and even achieve results comparable to the methods of the “Model Optimization” category. The results are reported with standard deviation over five random seeds. Figure 3: Concept composition of multiple learned embeddings. Bolded texts are additional text conditions. 4.3 Quantitative Comparison The results of image-image and text-image alignment scores compared with previous works are shown in Table 2. As can be seen, when compared with TI by text-image alignment score, BaTex substantially outperforms it (0.76 to 0.70) while maintaining non-degrading image reconstruction effect (0.74 to 0.74). For “Model Optimization” category, BaTex is competitive in both metrics, while their methods perform poorly in one of them due to overfitting. Additional results can be found in Appendix F. 4.4 User Study Following Textual Inversion, we have conducted a human evaluation with two test phases of image-image and text-image alignments. We collected a total of 160 responses to each phase. The results are presented in Table 3, showing that the human evaluation results align with the CLIP scores. | Metric | DB | CD | TI | Ours | |------------------------|-----|-----|-----|------| | Image-to-image alignment | 63.8 | 48.1 | 57.5 | 67.5 | | Text-to-image alignment | 77.5 | 58.1 | 27.5 | 71.9 | Table 3: Human preference study. | Metric | M=96 | M = 192 | M = 384 | M = 576 | M = 672 | |------------------------|------|---------|---------|---------|---------| | Text-image alignment score | 0.77 | 0.78 | 0.80 | 0.80 | 0.79 | | Image-image alignment score | 0.59 | 0.61 | 0.64 | 0.66 | 0.66 | | Convergence steps | 150 | 100 | 400 | 500 | 1000 | Table 4: Results of text-image and image-image alignment scores and convergence steps of dataset Gta5-artwork with respect to the dimension of textual subspace. 5 DISCUSSION In this section, we discuss the effects of proposed BaTex. Since the dimension of the textual subspace highly affects the search space of the target embedding, we perform an ablation study on the dimension $M$ and training steps. We also analyze the robustness and flexibility of BaTex by replacing the initial word. The results can be found in Appendix F. The limitations and societal impact of BaTex are discussed in Appendix A and D respectively. The reproducibility statement is presented in Appendix G. Dimension of textual subspace The choice of $M$ is significant to our method since it affects the solution space of target embedding. Specifically, we compare the text-alignment and image-alignment scores by the following numbers: \{96, 192, 384, 576, 672\} (Since $d = 768$, we only compare $M$ values less than 768). We show the results of dataset Gta5-artwork with respect to the dimension $M$ in Table 4. As can be seen, choosing $M = 576$ leads to relatively better results. The reasons are two-fold. First, for textual subspace with excessive dimension, optimizing is inefficient and requires more convergence steps as shown in the column of “$M = 672$”. Second, For low-dimensional textual subspace, although it generally converges faster, it is difficult to reconstruct the input image as can be seen in the row “Image-image alignment score”. 
We also observe a slight decrease in the value of text-image alignment score as the dimension decreases, which can be explained by the fact that it might not include enough semantic-related embeddings as its basis vectors. Thus, we choose to set $M = 576$ although it is possible to finetune $M$ for each dataset. Training steps The convergence steps of dataset Gta5-artwork with respect to the dimension $M$ are shown in Table 4. We observe that when lowering the dimension, it significantly improves the convergence speed. We also notice an outlier for “$M = 96$”, which can be explained by its low image-image alignment score, making it difficult to converge. Thus, We recommend to train BaTex with $M = 576$ for 500 steps. 6 CONCLUSION We have proposed BaTex, a novel method for efficiently learning arbitrary concept in a textual subspace. Through a rank-based selection strategy, BaTex determines the textual subspace using the information from the vocabulary, which is time-efficient and better preserves the text similarity of the learned embedding. On the theoretical side, we demonstrate that the selected embeddings can be combined to produce arbitrary vectors in the embedding space and the proposed method is equivalent to applying a projection matrix to the update of embedding. We experimentally demonstrate the efficiency and robustness of the proposed BaTex. Future improvements include introducing sparse optimization algorithm to automatically choose the dimension of textual subspace, and combining with “Model Optimization” methods to improve its image-image alignment score. REFERENCES Yuval Alaluf, Elad Richardson, Gal Metzer, and Daniel Cohen-Or. A neural space-time representation for text-to-image personalization. arXiv preprint arXiv:2305.15391, 2023. Fernando Amat, Ashok Chandrashekar, Tony Jebara, and Justin Basilico. Artwork personalization at netflix. In Proceedings of the 12th ACM conference on recommender systems, pp. 487–488, 2018. Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$-vae. arXiv preprint arXiv:1804.03599, 2018. Julie Cattiau. A communication tool for people with speech impairments, 2022. Niv Cohen, Rinon Gal, Eli A Meiron, Gal Chechik, and Yuval Atzmon. “this is my unicorn, fluffy”: Personalizing frozen vision-language representations. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX, pp. 558–577. Springer, 2022. Riccardo Corvi, Davide Cozzolino, Giada Zingarini, Giovanni Poggi, Koki Nagano, and Luisa Verdoliva. On the detection of synthetic images generated by diffusion models. arXiv preprint arXiv:2211.00680, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021. Yuxuan Ding, Lingqiao Liu, Chunna Tian, Jingyuan Yang, and Haoxuan Ding. Don’t stop learning: Towards continual learning for the clip model. arXiv preprint arXiv:2207.09248, 2022. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. Shian Du, Yihong Luo, Wei Chen, Jian Xu, and Delu Zeng. 
To-flow: Efficient continuous normalizing flows with temporal optimization adjoint with moving speed. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12570–12580, 2022. Ehsan Elhamifar and René Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE transactions on pattern analysis and machine intelligence, 35(11):2765–2781, 2013. Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022. Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Encoder-based domain tuning for fast personalization of text-to-image models. ACM Transactions on Graphics (TOG), 42(4):1–13, 2023. Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544, 2021. Yoav Goldberg and Omer Levy. word2vec explained: deriving mikolov et al.’s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722, 2014. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020. Ligong Han, Yinxiao Li, Han Zhang, Peyman Milanfar, Dimitris Metaxas, and Feng Yang. Svdiff: Compact parameter space for diffusion fine-tuning. arXiv preprint arXiv:2303.11305, 2023.
cJs4oE4m9Q
Figure 2 shows that the orthogonal projection improves the performance of anomaly detection. What is the fundamental reason? I suggest the authors provide further analysis as well as some references if possible.
DEEP ORTHOGONAL HYPERSPHERE COMPRESSION FOR ANOMALY DETECTION Yunhe Zhang\textsuperscript{1,2} Yan Sun\textsuperscript{1,3} Jinyu Cai\textsuperscript{4} Jicong Fan\textsuperscript{1,2,*} \textsuperscript{1}School of Data Science, The Chinese University of Hong Kong, Shenzhen, China \textsuperscript{2}Shenzhen Research Institute of Big Data, Shenzhen, China \textsuperscript{3}School of Computing, National University of Singapore, Singapore \textsuperscript{4}Institute of Data Science, National University of Singapore, Singapore zhangyhannie@gmail.com yansun@comp.nus.edu.sg jinyucail995@gmail.com fanjicong@cuhk.edu.cn ABSTRACT Many well-known and effective anomaly detection methods assume that a reasonable decision boundary has a hypersphere shape, which however is difficult to obtain in practice and is not sufficiently compact, especially when the data are in high-dimensional spaces. In this paper, we first propose a novel deep anomaly detection model that improves the original hypersphere learning through an orthogonal projection layer, which ensures that the training data distribution is consistent with the hypersphere hypothesis, thereby increasing the true positive rate and decreasing the false negative rate. Moreover, we propose a bi-hypersphere compression method to obtain a hyperspherical shell that yields a more compact decision region than a hyperball, which is demonstrated theoretically and numerically. The proposed methods are not confined to common datasets such as image and tabular data, but are also extended to a more challenging but promising scenario, graph-level anomaly detection, which learns graph representation with maximum mutual information between the substructure and global structure features while exploring orthogonal single- or bi-hypersphere anomaly decision boundaries. The numerical and visualization results on benchmark datasets demonstrate the superiority of our methods in comparison to many baselines and state-of-the-art methods. 1 INTRODUCTION Anomaly detection plays a crucial role in a variety of applications, including fraud detection in finance, fault detection in chemical engineering (Fan & Wang [2014]), medical diagnosis, and the identification of sudden natural disasters (Aggarwal [2017]). Significant research has been conducted on anomaly detection using both tabular and image data (Ruff et al. [2018], Fan & Chow [2020], Goyal et al. [2020], Chen et al. [2022], Liznerski et al. [2021], Sohn et al. [2021], Liznerski et al. [2021]). A common setting is to train a model solely on normal data to distinguish unusual patterns from abnormal ones, which is usually referred to as one-class classification (Schölkopf et al. [1999], Tax & Duin [2004], Ruff et al. [2018], Pang et al. [2021], Seliya et al. [2021]). For example, the support vector data description (SVDD) proposed by (Tax & Duin [2004]) obtains a spherically shaped boundary around a dataset, where data points falling outside the hypersphere will be detected as anomalous data. The deep SVDD proposed by (Ruff et al. [2018]) trains a neural network to transform the input data into a space in which normal data are distributed in a hyperspherical decision region. Regarding the concern that finite training normal data generating distribution may be incomplete or draw from many sets of categories, Kirchheim et al. (2022) proposed a supervised multi-class hypersphere anomaly detection method. Han et al. (2022) provided a review and comparison of many anomaly detection methods. 
Compared with common anomaly detection, there is relatively little work on graph-level data, despite the fact that graph anomaly detection has application scenarios in various problems, such as identifying abnormal communities in social networks, discriminating whether human-brain networks are healthy (Lanciano et al. [2020]), or detecting unusual protein structures in biological... experiments. The target of graph-level anomaly detection is to explore a regular group pattern and distinguish the abnormal manifestations of the group. However, graph data are inherently complex and rich in structural and relational information. This characteristic facilitates the learning of powerful graph-level representations with discriminative patterns in many supervised tasks (e.g., graph classification) but brings many obstacles to unsupervised learning. Graph kernels (Kriege et al., 2020) are useful for both supervised and unsupervised graph learning problems. For graph-level anomaly detection, graph kernels can be combined with one-class SVM (Schölkopf et al., 1999) or SVDD (Tax & Duin, 2004). This is a two-stage approach that cannot ensure that implicit features are sufficiently expressive for learning normal data patterns. Recently, researchers proposed several end-to-end graph-level anomaly detection methods (Ma et al., 2022; Zhao & Akoglu, 2021; Qiu et al., 2022). For example, Ma et al. (2022) proposed a global and local knowledge distillation method for graph-level anomaly detection. Zhao & Akoglu (2021) combined the deep SVDD objective function and graph isomorphism network to learn a hypersphere of normal samples. Although the hypersphere assumption is reasonable and practical, and has led to many successful algorithms (Tax & Duin, 2004; Ruff et al., 2018, 2020; Kirchheim et al., 2022; Zhao & Akoglu, 2021) for anomaly detection, it still exhibits the following three limitations: - First, minimizing the sum of squares of the difference between each data point and the center cannot guarantee that the learned decision boundary is a standard hypersphere. Instead, one may obtain a hyperellipsoid (see Figure 2) or other shapes that are inconsistent with the assumption, which will lower the detection accuracy. - The second is that in high-dimensional space the normal data enclosed by a hypersphere are all far away from the center (see Figure 3 and Proposition 2) with high probability. It means that there is no normal data around the center of the hypersphere; hence, the normality in the region is not supported, whereas anomalous data can still fall into the region. It’s related to the soap-bubble phenomenon of high-dimensional statistics (Vershynin, 2018). - Last but not least, in high-dimensional space, one hypersphere is not sufficiently compact. In other words, the distribution of normal data in the hypersphere is extremely sparse because of the high dimensionality and limited training data. A high sparsity increases the risk of detecting anomalous data as normal. To address these issues, we propose two anomaly detection methods. The first one, Deep Orthogonal Hypersphere Contraction (DOHSC), utilizes an orthogonal projection layer to render the decision region more hyperspherical and compact to reduce evaluation errors. The second one, Deep Orthogonal Bi-Hypersphere Compression (DO2HSC), aims to solve the problem of the soap-bubble phenomenon and incompactness. 
From a 2-dimensional view, DO2HSC limits the decision area (of normal data) to an interval enclosed by two co-centered hyperspheres, and similarly learns the orthogonality-projected representation. Accordingly, a new detection metric is proposed for DO2HSC. The framework of the methods mentioned above is shown in Figure 1. In addition, graph-level extensions of DOHSC and DO2HSC are conducted to explore a more challenging task, i.e., graph-level anomaly detection. In summary, our contributions are three-fold. - First, we present a hypersphere contraction algorithm for anomaly detection tasks with an orthogonal projection layer to promote training data distribution close to the standard hypersphere, thus avoiding inconsistencies between assessment criteria and actual conditions. - Second, we propose the deep orthogonal bi-hypersphere compression model to construct a decision region enclosed by two co-centered hyperspheres, which has theoretical supports and solves the problem of soap-bubble phenomenon and incompactness of the single-hypersphere assumption. - Finally, we extend our methods to graph-level anomaly detection and conduct abundant experiments to show the superiority of our methods over the state-of-the-art. 2 DEEP ORTHOGONAL HYPERSPHERE COMPRESSION 2.1 VANILLA MODEL Denote a data matrix by $X \in \mathbb{R}^{n \times d}$ with $n$ instances and $d$ features, we first construct an auto-encoder and utilize the latent representation $Z = f_{\mathcal{W}}^{\text{enc}}(X)$ to initialize a decision region’s center $c$ according to Deep SVDD (Ruff et al., 2018), i.e., $c = \frac{1}{n} \sum_{i=1}^{n} f_{\mathcal{W}}^{\text{enc}}(x_i)$, where $x_i$ denotes the transpose of the $i$-th row of $X$ and $f_{\mathcal{W}}^{\text{enc}}(\cdot)$ is an $L$-layer representation learning module with parameters $\mathcal{W} = \{W_l, b_l\}_{l=1}^{L}$. With this center, we expect to optimize the learned representation of normal data to be distributed as close to it as possible, so that the unexpected anomalous data falling out of this hypersphere would be detected. The Hypersphere Contraction optimization problem for anomaly detection is first formulated as follows: $$\min_{\mathcal{W}} \frac{1}{n} \sum_{i=1}^{n} \|f_{\mathcal{W}}^{\text{enc}}(x_i) - c\|^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \|W_l\|_F^2,$$ where the regularization is to reduce over-fitting. 2.2 ORTHOGONAL PROJECTION LAYER Although the goal of Optimization (1) is to learn a hypersphere as the decision boundary, we find that it usually yields a hyperellipsoid or even more irregular shapes (please refer to Section I in the supplementary material). This phenomenon would lead to inaccuracies in the testing stage, because the evaluation was based on the hypersphere assumption. Figure 2 illustrates an intuitive example. In the left plot, the learned decision boundary (black ellipse) does not match the assumption (blue circle), which decreases the true-positive (TP) rate and increases the false-positive (FP) rate. Thus the detection precision, calculated as $\frac{TP}{TP+FP}$, decreases compared to the right plot. The inconsistency between the assumption and the actual solution stems from the following two points: 1) the learned features have different variances and 2) the learned features are correlated. Clearly, these two issues cannot be avoided by solely solving Optimization (1). 
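For concreteness, a minimal PyTorch sketch of the hypersphere-contraction objective in Optimization (1) is given below; the encoder is a toy MLP, and the optimizer's $\ell_2$ weight decay plays the role of the Frobenius-norm regularizer.

```python
# A toy sketch of Optimization (1): the center c is computed once from an
# initial forward pass and kept fixed, then the encoder is trained to pull the
# (normal) training representations toward c.
import torch
import torch.nn as nn

X = torch.randn(256, 20)                              # toy normal training data (n = 256, d = 20)
enc = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))

with torch.no_grad():
    c = enc(X).mean(dim=0)                            # c = (1/n) sum_i f_W^enc(x_i), kept fixed

# weight_decay corresponds to the (lambda/2) * sum_l ||W_l||_F^2 term.
opt = torch.optim.Adam(enc.parameters(), lr=1e-3, weight_decay=1e-4)
for epoch in range(50):
    loss = ((enc(X) - c) ** 2).sum(dim=1).mean()      # (1/n) sum_i ||f_W^enc(x_i) - c||^2
    opt.zero_grad(); loss.backward(); opt.step()
```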
To solve these issues, as shown in the right plot of Figure 2, we append an orthogonal projection layer to the feature layer, i.e., the output of $f_{\mathcal{W}}^{\text{enc}}$. Note that we pursue orthogonal features of the latent representation rather than computing the projection onto the column or row space of $Z \in \mathbb{R}^{n \times k}$, which is equivalent to performing Principal Component Analysis (PCA) (Wold et al., 1987) and using standardized principal components. Our experiments also justify the necessity of this projection step and the standardization process, which will be discussed further in Appendix K. Specifically, the projection layer is formulated as $$\tilde{Z} = \text{Proj}_\Theta(Z) = ZW^*, \quad \text{subject to } \tilde{Z}^\top \tilde{Z} = I_{k'}, \tag{2}$$ where $\Theta := \{W^* \in \mathbb{R}^{k \times k'}\}$ is the set of projection parameters, $I_{k'}$ denotes the identity matrix, and $k'$ is the projected dimension. To achieve (2), one may consider adding a regularization term $\frac{\alpha}{2} \| \tilde{Z}^\top \tilde{Z} - I_{k'} \|_F^2$ with a large enough $\alpha$ to the objective, which is not very effective and introduces one more tuning hyperparameter. Instead, we propose to achieve (2) via singular value decomposition: $$Z = U \Lambda V^\top, \quad W^* := V_{k'} \Lambda_{k'}^{-1}. \tag{3}$$ Assume that there are $b$ samples in one batch; $\Lambda = \text{diag}(\rho_1, \rho_2, ..., \rho_b)$ and $V$ are the diagonal matrix of singular values and the right-singular matrix of $Z$, respectively. It is noteworthy that $V_{k'} := [v_1, ..., v_{k'}]$ denotes the first $k'$ right singular vectors, and $\Lambda_{k'} := \text{diag}(\rho_1, ..., \rho_{k'})$. In each forward pass, the original weight parameter is substituted by the new matrix $W^*$ in the subsequent loss computations. ### 2.3 Anomaly Detection With the orthogonal projection layer attached, the improved initialization of the center is rewritten in the following form $\tilde{c} = \frac{1}{n} \sum_{i=1}^{n} \tilde{z}_i$, which will be fixed until optimization is completed. The final objective function for anomaly detection tasks in a mini-batch becomes $$\min_{\Theta, \mathcal{W}} \frac{1}{b} \sum_{i=1}^{b} \| \tilde{z}_i - \tilde{c} \|^2 + \frac{\lambda}{2} \sum_{W \in \mathcal{W}} \| W \|^2_F. \tag{4}$$ After the training stage, the decision boundary $\hat{r}$ will be fixed, which is calculated based on the $1 - \nu$ percentile of the training data distance distribution: $$\hat{r} = \min\{r : P(D \leq r) \geq \nu\}, \tag{5}$$ where $D := \{d_i\}_{i=1}^{N}$ follows the empirical distribution $P$, and $d_i = \| \tilde{z}_i - \tilde{c} \|$. Accordingly, the anomaly score of the $i$-th instance is defined as follows: $$s_i = d_i^2 - \hat{r}^2, \tag{6}$$ where $s = (s_1, s_2, ..., s_n)$. It is evident that when the score is positive, the instance is identified as abnormal; otherwise, it is considered normal. The detailed procedures are summarized in Algorithm 1 (see Appendix A), which is termed DOHSC. DOHSC is easy to implement and can ensure that the actual decision boundary is close to a hypersphere. Our numerical results in Section 4 will show its effectiveness. ### 3 Deep Orthogonal Bi-Hypersphere Compression #### 3.1 Motivation and Theoretical Analysis As mentioned in the third paragraph of Section 1, the hypersphere assumption may encounter the soap-bubble phenomenon and incompactness.
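Before analyzing these issues in detail, the orthogonal projection of (2)-(3) and the DOHSC radius and score of (5)-(6) can be sketched as follows. This is a single-batch illustration with a hypothetical helper `orthogonal_project`, not the full training procedure; note that PyTorch's `torch.linalg.svd` supports backpropagation through the decomposition.

```python
# A single-batch sketch of the orthogonal projection layer and DOHSC scoring.
import torch

def orthogonal_project(Z, k_prime):
    # Z = U diag(rho) V^T; Z V_k' Lambda_k'^{-1} has orthonormal columns (Eq. (3)).
    U, S, Vh = torch.linalg.svd(Z, full_matrices=False)
    W_star = Vh[:k_prime].T / S[:k_prime]             # W* = V_k' Lambda_k'^{-1}
    return Z @ W_star

Z = torch.randn(256, 8)                               # a latent batch from the encoder f_W^enc
Z_tilde = orthogonal_project(Z, k_prime=4)
print(torch.allclose(Z_tilde.T @ Z_tilde, torch.eye(4), atol=1e-4))   # ~ identity, Eq. (2)

c_tilde = Z_tilde.mean(dim=0)                         # center (fixed before training in the paper)
d = torch.linalg.norm(Z_tilde - c_tilde, dim=1)       # distances d_i
r_hat = torch.quantile(d, 0.95)                       # radius from the distance distribution, Eq. (5)
scores = d ** 2 - r_hat ** 2                          # Eq. (6): positive => flagged as anomalous
```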
They can be succinctly summarized as - High-dimensional data enclosed by a hypersphere are naturally far from the center, which means that normality within a wide range of distances is not supported. - In high-dimensional space, the data distribution within a hypersphere is highly sparse, which leads to an incompact decision region and, hence, a heightened risk of detecting abnormal data as normal. In this section, we present a detailed analysis. Let the anomaly score be determined using $\| z - c \|$ where $c$ denotes the centroid. The original evaluation of anomaly detection compares the score with a threshold $\hat{r}$ determined by a certain quantile (e.g., 0.95). Specifically, if $\| z - c \| \geq \hat{r}$, $z$ is abnormal. This objective encourages most samples to lie near the center. However, empirical exploration has found that most samples are far away from the center in a high-dimensional space. Taking Gaussian distributions as an example, the distributions would look like a soap-bubble (see https://www.inference.vc/high-dimensional-gaussian-distributions-are-soap-bubble/), which means that high-dimensional normal data are more likely to be located in the interval region of a bi-hypersphere instead of a simple hypersphere. Vershynin (2018) stated that the typical set of a Gaussian, i.e., the region whose samples carry information closest to the expected entropy of the population, is a thin shell at a certain distance from the origin, just like the circumstances shown in Figure 3. The higher the dimensionality of the data, the farther the sampled instances are from the center. We also provide an anomaly detection simulation on high-dimensional Gaussian data in Appendix C to show the significance of bi-hypersphere learning. This is formally proven by the following proposition (derived from Lemma 1 of Laurent & Massart (2000)): **Proposition 1.** Suppose \( z_1, z_2, \ldots, z_n \) are sampled from \( N(0, I_d) \) independently. Then, for any \( z_i \) and all \( t \geq 0 \), the following inequality holds: \[ P \left[ \|z_i\| \geq \sqrt{d - 2\sqrt{dt}} \right] \geq 1 - e^{-t}. \] The proposition shows that when the dimension is high, each \( z_i \) is outside the hypersphere of radius \( r' := \sqrt{d - 2\sqrt{dt}} \) with a probability of at least \( 1 - e^{-t} \). When \( r' \) is closer to \( \hat{r} \) (refer to Equation (5)), normal data are more likely to be away from the center (see Figure 3). Note that, in anomaly detection, \( \tilde{z}_i \) (e.g., the learned latent representation) is not necessarily an isotropic Gaussian. However, we obtain the following result. **Proposition 2.** Let \( z_i = \tilde{z}_i, i = 1, \ldots, N \) and let \( f : \mathbb{R}^k \to \mathbb{R}^k \) be an \( \eta \)-Lipschitz function such that \( s = f(z) \) are isotropic Gaussian \( N(\bar{c}, I_k) \). Let \( c \) be a predefined center of \( \{z_i\}_{i=1}^N \) and suppose \( \| \bar{c} - f(c) \| \leq \epsilon \). Then for any \( z_i \) and all \( t \geq 0 \), the following inequality holds: \[ P \left[ \|z_i - c\| \geq \eta^{-1} \left( \sqrt{k - 2\sqrt{kt}} - \epsilon \right) \right] \geq 1 - e^{-t}. \] The proposition (proved in Appendix D) indicates that most data (\( N' \)) satisfy \( \|z - c\| \geq r' := \eta^{-1} \left( \sqrt{k - 2\sqrt{kt}} - \epsilon \right) \) with a probability of approximately \( \binom{N}{N'}(1 - e^{-t})^{N'}e^{-t(N-N')} \), where \( r' \) is close to \( \hat{r} \). This means there is almost no normal training data within the range \([0, r']\), i.e.,
normality within the range is not supported by the normal training data. An intuitive example is: **Example 1.** Assume the hypersphere is centered at the origin. Consider a data point with all features very close or even equal to zero. This point is very different from the normal training data and should be abnormal data. However, according to the metric \( \| \tilde{z} - \tilde{c} \| \), this point is still inside the hypersphere and is finally detected as normal data. Given the implications of Proposition 2, we recognize that in high-dimensional spaces, traditional distance-to-center based anomaly scores (Equation (5)) may lose their reliability due to the concentration of measure phenomenon. Figure 4 shows a real example of abnormal data falling into a region close to the center of the hypersphere. In addition to the soap-bubble phenomenon, we claim that the data distribution in a high-dimensional sphere is very sparse when the number \( n \) of the training data is limited. This means that when \( n \) is not sufficiently large, there could be large empty holes or regions in which normality is not supported because of the randomness. It is not reasonable to treat data that fall into such holes or regions as normal data. Intuitively, for example, the distribution of \( n \) random points in a 3-D sphere of radius \( r \) is much sparser than that in a 2-D circle of radius \( r \). More formally, in a hypersphere of radius \( r \) in \( k \)-dimensional space, the expected number of data points per unit volume is \( \varrho_k = \frac{n\Gamma(k/2+1)}{\pi^{k/2} r^k} \), where \( \Gamma \) is Euler's gamma function. When \( r \) is not too small, \( \varrho_k \) increases rapidly as \( k \) decreases. See below. **Example 2.** Suppose \( n = 1000, r = 5 \). Then \( \varrho_2 \approx 12.7, \varrho_5 \approx 0.06, \) and \( \varrho_{10} < 0.0001 \). We hope to construct a more compact decision region, one with a much larger \( \varrho_k \), without changing the feature dimensions. Figure 4: Illustration of inevitable flaws in DOHSC on both the training and testing data of COX2. Left: the $\ell_2$-norm distribution of 4-dimensional distances learned from the real dataset; Right: the pseudo-layout in two-dimensional space sketched by reference to the empirical distribution. 3.2 Architecture of DO2HSC To solve the issues discussed in the previous section, we propose an improved approach, DO2HSC, which sets the decision boundary as an interval region between two co-centered hyperspheres. This can narrow the scope of the decision area and induce normal data to fill as much of the entire interval area as possible. After the same representation learning stage, we first utilize the DOHSC model for a few epochs to initialize the large radius $r_{\text{max}}$ and the small radius $r_{\text{min}}$ of the interval area according to the $1 - \nu$ and $\nu$ percentiles of the sample distance distribution, respectively. The aforementioned descriptions can be mathematically denoted as follows: $$r_{\text{max}} = \min\{r : P(D \leq r) \geq \nu\}, \quad r_{\text{min}} = \min\{r : P(D \leq r) \geq 1 - \nu\}.$$ (7) After fixing $r_{\text{max}}$ and $r_{\text{min}}$, the objective function of DO2HSC is formulated as follows: $$\min_{\Theta, \mathcal{W}} \frac{1}{b} \sum_{i=1}^{b} (\max\{d_i, r_{\text{max}}\} - \min\{d_i, r_{\text{min}}\}) + \frac{\lambda}{2} \sum_{W \in \mathcal{W}} \|W\|_F^2.$$ (8) This decision loss has the lower bound $r_{\text{max}} - r_{\text{min}}$.
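The interval objective can be sketched as follows; this is a simplified single-batch illustration with hypothetical helper names, whereas in the actual method the radii are obtained from a DOHSC warm-up over the whole training set, frozen, and combined with the weight-decay term of (8).

```python
# A toy sketch of the DO2HSC radii (Eq. (7)) and interval loss (Eq. (8)).
import torch

def init_radii(d, nu=0.95):
    # Outer and inner radii from the nu and (1 - nu) quantiles of the distances.
    return torch.quantile(d, nu), torch.quantile(d, 1.0 - nu)

def do2hsc_loss(Z_tilde, c_tilde, r_max, r_min):
    # Mean width of the smallest interval covering each distance d_i; it attains
    # its lower bound r_max - r_min exactly when every d_i lies in [r_min, r_max].
    d = torch.linalg.norm(Z_tilde - c_tilde, dim=1)
    return (torch.maximum(d, r_max) - torch.minimum(d, r_min)).mean()

Z_tilde = torch.randn(256, 4)                         # projected batch from the DOHSC warm-up
c_tilde = Z_tilde.mean(dim=0)
d0 = torch.linalg.norm(Z_tilde - c_tilde, dim=1)
r_max, r_min = init_radii(d0)
print(do2hsc_loss(Z_tilde, c_tilde, r_max, r_min))    # lower-bounded by r_max - r_min
```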
In addition, the evaluation standard of the test data must also be changed based on this interval structure. Specifically, all instances located in the inner hypersphere and outside the outer hypersphere should be identified as anomalous individuals; only those located in the interval area should be regarded as normal data. We reset a new score function to award the positive samples beyond $[r_{\text{min}}, r_{\text{max}}]$ while punishing the negative samples within this range. Accordingly, the distinctive scores are calculated by $$s_i = (d_i - r_{\text{max}}) \cdot (d_i - r_{\text{min}}),$$ (9) where $i \in \{1, \ldots, n\}$. In this manner, we can also effectively identify a sample’s abnormality based on its score. In general, an improved deep anomaly detection algorithm changes the decision boundary and makes the normal area more compact. Furthermore, a new practical evaluation was proposed to adapt to the improved detection method. Finally, we summarize the detailed optimization procedures in Algorithm 2 (see Appendix A). The following proposition justifies the superiority of bi-hypersphere compression over single-hypersphere contraction from another perspective: **Proposition 3.** Suppose the number of normal training data is $n$, the radius of the hypersphere given by DOHSC is $r_{\text{max}}$, and the radii of the hyperspheres given by DO2HSC are $r_{\text{max}}$ and $r_{\text{min}}$ respectively. Without loss of generality, assume that all the training data are included in the learned decision regions. The ratio between the support densities of the decision regions given by DO2HSC and DOHSC is $\kappa = \frac{1}{1 - (r_{\text{min}}/r_{\text{max}})^k}$. In the proposition (proved in Appendix E), density is defined as the number of normal data in unit volume. A higher density indicates a higher confidence in treating a data point falling into the decision region as normal data, or treating a data point falling outside the decision region as anomalous data. Because $\kappa > 1$, the DO2HSC provides a more reliable decision region than the DOHSC. The advantage of the DO2HSC over the DOHSC is more significant when $k$ is smaller or $r_{\text{min}}$ is closer to $r_{\text{max}}$. Here are some examples. Example 3. Suppose \( k = 50 \). When \( r_{\text{min}}/r_{\text{max}} = 0.9 \), \( \kappa \approx 1.01 \). When \( r_{\text{min}}/r_{\text{max}} = 0.99 \), \( \kappa \approx 2.5 \). Suppose \( k = 10 \). When \( r_{\text{min}}/r_{\text{max}} = 0.9 \), \( \kappa \approx 1.5 \). When \( r_{\text{min}}/r_{\text{max}} = 0.99 \), \( \kappa \approx 10.5 \). ### 3.3 Generalization to Graph-Level Anomaly Detection Given a set of graphs \( G = \{G_1, ..., G_N\} \) with \( N \) samples, the proposed model aims to learn a \( k \)-dimensional representation and then set a soft boundary accordingly. In this paper, the Graph Isomorphism Network (GIN) [Xu et al., 2019] is employed to obtain the graph representation in three stages: first, input the graph data and integrate neighbors of the current node (AGGREGATE); second, combine neighbor and current node features (CONCAT); and finally, all node information (READOUT) is integrated into one global representation. 
Mathematically, the \( i \)-th node features of \( l \)-th layer and the global features of its affiliated \( j \)-th graph are denoted by \[ z^{(l)}_i = \text{CONCAT}(\{z^{(l)}_i\}_{l=1}), \quad Z_\Phi(G_j) = \text{READOUT}(\{z^{(l)}_i\}_{l=1}^{|G_j|}), \] where \( z^{(l)}_i \in \mathbb{R}^{1 \times k} \) and \( Z_\Phi(G_j) \in \mathbb{R}^{1 \times k} \). To integrate the contained information and enhance the differentiation between node- and global-level representations, we append additional fully connected layers denoted by the forms \( M_Y(\cdot) \) and \( T_\Psi(\cdot) \), respectively, where \( Y \) and \( \Psi \) are the parameters of the added layers. So the integrated node-level and graph-level representations are \[ h^{(l)}_{\Phi,Y} := M_Y(z^{(l)}_i); \quad H_{\Phi,\Psi}(G_j) := T_\Psi(Z_\Phi(G_j)). \] To better capture the local information, we utilize the batch optimization property of neural networks to maximize the mutual information (MI) between local and global representations in each batch \( G \subseteq G \), which is defined by [Sun et al., 2020] as follows: \[ \hat{\Phi}, \hat{\Psi}, \hat{Y} = \arg\max_{\Phi, \Psi, Y} I_{\Phi, \Psi, Y}(h_{\Phi,Y}, H_{\Phi,\Psi}(G)). \] Specifically, the mutual information estimator \( I_{\Phi, \Psi, Y} \) follows the Jensen-Shannon MI estimator [Nowozin et al., 2016] with a positive-negative sampling method, as follows: \[ I_{\Phi, \Psi, Y}(h_{\Phi,Y}, H_{\Phi,\Psi}(G)) := \sum_{G_j \in G} \frac{1}{|G_j|} \sum_{u \in G_j} I_{\Phi, \Psi, Y}(h^{(l)}_{\Phi,Y}(G_j), H_{\Phi,\Psi}(G)) \] \[ = \sum_{G_j \in G} \frac{1}{|G_j|} \sum_{u \in G_j} \left[ E(-\sigma(-h^{(l)}_{\Phi,Y}(x^+) \times H_{\Phi,\Psi}(x))) - E(\sigma(h^{(l)}_{\Phi,Y}(x^-) \times H_{\Phi,\Psi}(x))) \right], \] where \( \sigma(z) = \log(1 + e^z) \). For \( x \) as an input sample graph, we calculate the expected mutual information using its positive samples \( x^+ \) and negative samples \( x^- \), which are generated from the distribution across all graphs in a subset. Given that \( G = (V_G, E_G) \) and the node set \( V_G = \{v_i\}_{i=1}^{|G|} \), the positive and negative samples are divided in this manner: \( x^+ = x_{ij} \) if \( v_i \in G_j \) otherwise, \( x^+ = 0 \). Additionally, \( x^- \) produces the opposite result for each of the above conditions. Thus, a data-enclosing decision boundary is required for our anomaly detection task. Let \( \tilde{H}_{\Phi,\Psi,\Theta}(G) = \text{Proj}_\Theta(H_{\Phi,\Psi}(G)) \), the center of this decision boundary should be initialized through \[ \tilde{c} = \frac{1}{N} \sum_{i=1}^N \tilde{H}_{\Phi,\Psi,\Theta}(G_i). \] Collectively, the weight parameters of \( \Phi, \Psi \) and \( Y \) are \( Q := \Phi \cup \Psi \cup Y \), and let \( R(Q) = \sum_{Q \in Q} \|Q\|^2_F \), we formulate the objective function of the graph-level DOHSC as \[ \min_{\Theta, \Phi, \Psi, Y} \frac{1}{|G|} \sum_{i=1}^{|G|} \| \tilde{H}_{\Phi,\Psi,\Theta}(G_i) - \tilde{c} \|^2 - \lambda \sum_{G \in G} I_{\Phi, \Psi, Y}(h_{\Phi,Y}, \tilde{H}_{\Phi,\Psi,\Theta}(G)) + \frac{\mu}{2} R(Q), \] where \( |G| \) denotes the number of graphs in batch \( G \) and \( \lambda \) is a trade-off factor, the third term is a network weight decay regularizer with the hyperparameter \( \mu \). 
Correspondingly, the objective function of graph-level DO2HSC is \[ \min_{\Theta, \Phi, \Psi, Y} \frac{1}{|G|} \sum_{i=1}^{|G|} (\max\{d_i, r_{\text{max}}\} - \min\{d_i, r_{\text{min}}\}) - \lambda \sum_{G \in G} I_{\Phi, \Psi, Y}(h_{\Phi,Y}, \tilde{H}_{\Phi,\Psi,\Theta}(G)) + \frac{\mu}{2} R(Q). \] 4 Numerical Results 4.1 Experiments on Image Data Datasets: Two image datasets (Fashion-MNIST, CIFAR-10) are chosen to conduct this experiment. Please refer to the detailed statistic descriptions in Appendix F. Baselines: We followed the settings in (Ruff et al., 2018) and utilized the Area Under Operating Characteristic Curve (AUC) of several state-of-the-art anomaly detection algorithms, including Deep SVDD (Ruff et al., 2018), OCGAN (Perera et al., 2019), HRN-L2 and HRN (Hu et al., 2020), PLAD (Cai & Fan, 2022), and DROCC (Goyal et al., 2020). All SOTAs’ results are given according to their officially reported results or are reproduced by official codes. Considering that there is not much room for performance improvement on Fashion-MNIST, we only reproduced the results of recent or most relative algorithms, which contains Deep SVDD (Ruff et al., 2018), and DROCC (Goyal et al., 2020). The network architecture of Deep SVDD is set the same as ours for fairness. Results: The experimental results are listed in Table 1. On CIFAR-10, both DOHSC and DO2HSC surpassed SOTAs, especially for Dog and Frog. Second, DO2HSC obtained better results compared with DOHSC, which further verifies the effectiveness of bi-hypersphere anomaly detection and fully demonstrates its applicability to image data. It is also worth mentioning that Deep SVDD plays an important baseline role relative to DOHSC, and DOHSC outperforms it by a large margin in all classes. This illustrates the significant meaning of the proposed orthogonal projection method is constructive. The result of Fashion-MNIST is in Appendix G. | Normal Class | Airplane | Auto | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck | |--------------|----------|------|------|-----|------|-----|------|-------|------|-------| | Deep SVDD | 61.7 | 65.9 | 50.8 | 59.1| 60.9 | 65.7| 67.7 | 67.3 | 75.9 | 73.1 | | OCGAN | 75.7 | 53.1 | 64.0 | 62.0| 72.3 | 62.0| 72.3 | 57.5 | 82.0 | 55.4 | | DROCC* | 82.1 | 64.8 | 69.2 | 64.4| 72.8 | 66.5| 68.6 | 67.5 | 79.3 | 60.6 | | HRN-L2 | 80.6 | 48.2 | 64.9 | 57.4| 73.3 | 61.0| 74.1 | 55.5 | 79.9 | 71.6 | | HRN | 77.3 | 69.9 | 60.6 | 64.4| 72.8 | 67.4| 77.4 | 64.7 | 82.5 | 77.3 | | PLAD | 85.4 | 80.8 | 68.8 | 64.2| 71.6 | 67.4| 73.5 | 60.6 | 80.5 | | | DOHSC | 80.3 | 81.0 | 70.4 | 68.0| 72.1 | 72.4| 83.1 | 74.1 | 83.5 | 81.1 | | | (0.0) | (0.0)| (1.9)| (1.8)| (0.0)| (2.1)| (0.0) | (0.4) | (0.7)| (0.7) | | DO2HSC | 81.3 | 82.7 | 71.3 | 71.2| 72.9 | 72.8| 83.0 | 75.5 | 84.4 | 82.0 | | | (0.2) | (0.3)| (0.4)| (1.3)| (2.1)| (0.2)| (0.8) | (0.4) | (0.5)| (0.9) | Table 1: Average AUCs (%) in one-class anomaly detection on CIFAR-10. * denotes we run the official released code to obtain the results, and the top two results are marked in bold. 4.2 Experiments on Tabular Data Datasets: Here, we use two tabular datasets (Thyroid, Arrhythmia), and we followed the data split settings in (Zong et al., 2018). Results: The F1-scores of our methods and six baselines are reported in Table 2. A significant margin was observed between the baselines and ours, especially the results of Thyroid. Despite the challenge posed by the small sample size of the Arrhythmia data, DO2HSC still outperforms PLAD by a margin of 3%. 
Similarly, the orthogonal projection of DOHSC successfully standardized the results of Deep SVDD. | Method | Thyroid | Arrhythmia | |-----------------|---------|------------| | OCSVM (Schölkopf et al., 1999) | 0.56 ± 0.01 | 0.64 ± 0.01 | | Deep SVDD (Ruff et al., 2018) | 0.73 ± 0.00 | 0.54 ± 0.01 | | LOF (Breunig et al., 2000) | 0.54 ± 0.01 | 0.51 ± 0.01 | | GOAD (Bergman & Hoshen, 2020) | 0.75 ± 0.01 | 0.52 ± 0.02 | | DROCC (Goyal et al., 2020) | 0.78 ± 0.03 | 0.69 ± 0.02 | | PLAD (Cai & Fan, 2022) | 0.77 ± 0.01 | 0.71 ± 0.02 | | DOHSC | 0.92 ± 0.01 | 0.70 ± 0.03 | | DO2HSC | 0.98 ± 0.59 | 0.74 ± 0.02 | Table 2: Average F1-scores with the standard deviation in one-class anomaly detection on two tabular datasets. The best two results are marked in bold. 4.3 Experiments on Graph Data Datasets: We further evaluate our models on six real-world graph datasets (COLLAB, COX2, ER_MD, MUTAG, DD and IMDB-Binary). Our experiments followed the standard one-class settings and data-split method in a previous work (Zhao & Akoglu, 2021; Qiu et al., 2022). Baselines: We compare our methods with the following methods, including four graph kernels combined with OCSVM and four state-of-the-art baselines: RW (Gärtner et al., 2003; Kashima et al., 2003), SP (Borgwardt & Kriegel, 2005), WL (Shervashidze et al., 2011), and NH (Hido & Figure 5: Distance Histograms on ER_MD. Figure 6: 3-D plots of DO2HSC on MUTAG. Table 3: Average AUCs with standard deviation (10 trials) of different graph-level anomaly detection algorithms. ‘DSVDD’ stands for ‘Deep SVDD’. We assess models by regarding every data class as normal data, respectively. The best two results are highlighted in **bold** and ‘-’ means out of memory. | | COLLAB | MUTAG | ER_MD | |------------------|--------|-------|-------| | **NH+OCSVM** | 0.5910 ± 0.0000 | 0.8397 ± 0.0000 | 0.7996 ± 0.0000 | | WL+OCSVM | 0.5122 ± 0.0000 | 0.8054 ± 0.0000 | 0.6509 ± 0.0000 | | NH+OCSVM | 0.5976 ± 0.0000 | 0.8054 ± 0.0000 | 0.6414 ± 0.0000 | | RW+OCSVM | - | - | 0.7959 ± 0.0274 | | OCGIN | 0.4217 ± 0.0696 | 0.7585 ± 0.2035 | 0.7490 ± 0.0857 | | infoGraph+DSVDD | 0.5663 ± 0.0097 | 0.7906 ± 0.0986 | 0.6062 ± 0.0079 | | GLocalKD | 0.6538 ± 0.0003 | 0.4330 ± 0.0016 | 0.4792 ± 0.0004 | | OCGTL | 0.6504 ± 0.0433 | 0.8098 ± 0.0239 | 0.4029 ± 0.0541 | | **DOHSC** | 0.9185 ± 0.0455 | 0.9755 ± 0.0030 | 0.8826 ± 0.0250 | | **DO2HSC** | 0.9390 ± 0.0025 | 0.9836 ± 0.0115 | 0.8835 ± 0.0118 | Kashima [2009], OCGIN [Zhao & Akoglu 2021], infoGraph+Deep SVDD [Sun et al. 2020] [Ruff et al. 2018], GLocalKD [Ma et al., 2022] and OCGTL [Qiu et al. 2022]. Results: Table 3 shows the comparable results of graph-level anomaly detection. 1) The proposed methods achieved the best AUC values compared to the other algorithms on all datasets. Both outperform the other state-of-the-art baselines. 2) DO2HSC is obviously more effective than DOHSC, especially since we observed that there exists a large improvement (exceeding 20%) in Class 1 of ER_MD between DOHSC and DO2HSC. A distance distribution visualization is provided to show their differences in Figure 5. Owing to length limitations, please refer to Appendix H for the remaining results. 3) The anomaly detection visualization results of DO2HSC displayed in Figure 6 also demonstrate excellent performance. We drew them by setting the projection dimension to 3, and please refer to Appendix I for the results of different perspectives. 4.4 More Results and Analysis We provide the time and space complexity analysis in Appendix B. 
Also, the ablation study (including orthogonal projection, mutual information maximization, etc.), parameter sensitivity (e.g., different percentile settings), robustness analysis, and more visualization results are shown in Appendices J and I respectively. 5 Conclusion This paper proposes two novel end-to-end AD methods, DOHSC and DO2HSC, that mitigate the possible shortcomings of hypersphere boundary learning by applying an orthogonal projection for global representation. Furthermore, DO2HSC projects normal data between the interval areas of two co-centered hyperspheres to significantly alleviate the soap-bubble issue and the incompactness of a single hypersphere. We also extended DOHSC and DO2HSC to graph-level anomaly detection, which combines the effectiveness of mutual information between the node level and global features to learn graph representation and the power of hypersphere compression. The comprehensive experimental results strongly demonstrate the superiority of the DOHSC and DO2HSC on multifarious datasets. One limitation of this work is that we did not consider cases in which the training data consisted of multiple classes of normal data, which is beyond the scope of this study. Our source code is available at https://github.com/wownice333/DOHSC-DO2HSC. ACKNOWLEDGEMENTS This work was supported by the National Natural Science Foundation of China under Grant No. 62376236, the General Program JCYJ20210324130208022 of Shenzhen Fundamental Research, the research funding T00120210002 of Shenzhen Research Institute of Big Data, and the funding UDF01001770 of The Chinese University of Hong Kong, Shenzhen. REFERENCES Charu C Aggarwal. An introduction to outlier analysis. 2017. Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. In Proceedings of the 8th International Conference on Learning Representations, 2020. Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Proceedings of the Fifth IEEE International Conference on Data Mining, pp. 8–pp, 2005. Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. LOF: identifying density-based local outliers. In Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 93–104, 2000. Jinyu Cai and Jicong Fan. Perturbation learning based anomaly detection. In Advances in Neural Information Processing Systems, 2022. Yuanhong Chen, Yu Tian, Guansong Pang, and Gustavo Carneiro. Deep one-class classification via interpolated gaussian descriptor. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 383–392, 2022. Jicong Fan and Tommy W. S. Chow. Exactly robust kernel principal component analysis. IEEE Transactions on Neural Networks and Learning Systems, 31(3):749–761, 2020. Jicong Fan and Youqing Wang. Fault detection and diagnosis of non-linear non-gaussian dynamic processes using kernel dynamic independent component analysis. Information Sciences, 259: 369–379, 2014. Thomas Gärtner, Peter Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. In Learning Theory and Kernel Machines, pp. 129–143, 2003. Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. DROCC: deep robust one-class classification. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 3711–3721, 2020. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. 
Advances in neural information processing systems, 30, 2017. Songqiao Han, Xiyang Hu, Haiiliang Huang, Minqi Jiang, and Yue Zhao. Adbench: Anomaly detection benchmark. Advances in Neural Information Processing Systems, 35:32142–32159, 2022. Shohei Hido and Hisashi Kashima. A linear-time graph kernel. In Ninth IEEE International Conference on Data Mining, pp. 179–188, 2009. Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, and Bing Liu. Hrn: A holistic approach to one class learning. Advances in Neural Information Processing Systems, 33:19111–19124, 2020. Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning, pp. 321–328, 2003. Konstantin Kirchheim, Marco Filax, and Frank Ortmeier. Multi-class hypersphere anomaly detection. In Proceedings of the 26th International Conference on Pattern Recognition, pp. 2636–2642. IEEE, 2022.
4Hv5DLTJLF
A key technical challenge in prior work on personalized FL has been to address partial client participation. The paper does not tackle this question and assumes full participation of clients for the analysis (and even the experiments, as far as I can tell)
Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization Anonymous authors Paper under double-blind review Abstract Federated learning is a large scale machine learning training paradigm where data is distributed across clients, and can be highly heterogeneous from one client to another. To ensure personalization in client models, and at the same time to ensure that the local models have enough commonality (i.e., prevent “client-drift”), it has been recently proposed to cast the federated learning problem as a consensus optimization problem, where local models are trained on local data, but are forced to be similar via a regularization term. In this paper we propose an improved federated learning algorithm, where we ensure consensus optimization at the representation part of each local client, and not on whole local models. This algorithm naturally takes into account that today’s deep networks are often partitioned into a feature extraction part (representation) and a prediction part. Our algorithm ensures greater flexibility compared to previous works on exact shared representation in highly heterogeneous settings, as it has been seen that the representation part can differ substantially with data distribution. Our method is quite stable to noise, and can be made differentially private with strong privacy guarantee without much loss of accuracy. We provide a complete convergence analysis of our algorithm under general nonconvex loss functions, and validate its good performance experimentally in standard datasets. 1 Introduction Federated learning (FL) has attracted much attention from the machine learning community recently due to rapid development of distributed intelligent devices and the demand of data privacy protection in large scale learning models. A typical FL framework is a machine learning training paradigm that includes a central server to aggregate the local information from participating clients to update a global model. The local data of each client should not be shared with other clients and should ideally be kept private up to certain degree also from the server [Konečný et al. (2016); McMahan et al. (2017); Kairouz & McMahan (2021)]. With $M$ clients, a standard FL algorithm usually tries to solve the following optimization problem: $$\min_{\omega} \frac{1}{M} \sum_{i=1}^{M} f_i(\omega)$$ where $\omega$ is a global model updated at the server, $f_i(\omega)$ is the local objective function at $i$-th client (the empirical risk functions at each of the client evaluated at their respective data samples). At each iteration, a local (stochastic) gradient or the entire local model is sent to the server for global model update. However, in the context of FL, the data distribution across different clients are usually highly non-identical and heterogeneous. Thus in many practical applications, a single global model is not sufficient to satisfy the requirements of all the clients. To tackle this issue, many personalized FL methods have been proposed to allow each client to maintain a local model. A popular formulation of the problem is to use the concept of consensus optimization [Smith et al. (2017); T Dinh et al. (2020); Li et al. (2021)], that replaces the optimization problem of eq. 
(1) with the following: $$\min_{\omega_0, \{\omega_i\}_{i=1}^{M}} \frac{1}{M} \sum_{i=1}^{M} \left( f_i(\omega_i) + \frac{\lambda}{2} \| \omega_i - \omega_0 \|^2 \right)$$ (2) where $\omega_0$ is the global model maintained at the server, $\omega_i$ is a unique local model at the $i$-th client, and $\lambda$ is a hyper-parameter that balances local training against the forced consensus. The local models are not required to be exactly the same, but the regularization forces them to be close to each other, and the parameter $\lambda$ provides the flexibility to fit the local data distributions. Recent success of centralized multi-task learning is based on the realization that different tasks share a common representation [Bengio et al., (2013); Collins et al., (2021)]. Inspired by this observation, several studies have tried to exploit shared representations in personalized federated learning to achieve better local performance [Arivazhagan et al., (2019); Collins et al., (2021); Pillutla et al., (2022)]. In this setting, at a high level, the local prediction model at each client is divided into two parts, including a representation part common to all clients. This motivates our first question: **Q1:** Can we force the consensus (cf. eq. 2) at the representation level, not at the whole-model level? Note that a regularization at the representation level involves fewer variables, and is therefore potentially less expensive (e.g., in taking gradients) than a constraint on the entire model. Indeed, in modern machine learning tasks, the model is usually a deep neural network consisting of a feature extractor and a prediction head. In the personalized FL works mentioned above, the deep neural network model is partitioned into a feature extractor $u$ and a prediction head $v$. They consider the following optimization problem across different clients: $$\min_{u,\{v_i\}_{i=1}^M} \frac{1}{M} \sum_{i=1}^{M} f_i(u,v_i)$$ (3) where $u$ is a global feature extractor mapping inputs to a low-dimensional space, and $v_i$ is the local prediction head at the $i$-th client. The server only maintains the global feature extractor $u$, not the whole model, and broadcasts it to all the clients at each communication round. The global extractor is trained via a method similar to FedAvg, a popular federated learning method [Li et al., (2020b)], and the local prediction heads are trained only locally. Each client thus generates the same representation for the same input. This method decouples the representation part from the prediction part and obtains better performance on heterogeneous data [Collins et al., (2021); Pillutla et al., (2022)]. However, as we will show in the next section, for different data distributions even the feature extractors can be different across clients. Although the prediction head exhibits the largest difference between clients, differences in the earlier layers also exist [Li et al., (2023)]. This motivates our second question: **Q2:** Can we further allow the feature extractor in one client to be different from others while still learning information on shared representations from other clients?
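To fix ideas before answering these questions, the following minimal sketch (our illustration only; the layer sizes, names, and input dimension are assumptions, not the paper's implementation) shows what the partition into a feature extractor $u$ and a prediction head $v$ in eq. (3) looks like in code:

```python
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Toy client model split into a feature extractor u and a prediction head v."""

    def __init__(self, in_dim=784, rep_dim=64, num_classes=10):
        super().__init__()
        # Feature extractor u: maps an input to a low-dimensional representation.
        self.extractor = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim), nn.ReLU(),
        )
        # Prediction head v: maps the representation to class scores.
        self.head = nn.Linear(rep_dim, num_classes)

    def forward(self, x):
        h = self.extractor(x)  # intermediate representation h(x | u)
        return self.head(h)    # prediction based on that representation
```

In the shared-representation methods above, only the parameters of `extractor` are aggregated at the server while `head` stays local; Q2 asks whether even `extractor` can be allowed to differ across clients.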
Motivated by the two questions, in this work we propose a consensus optimization problem at the representation level as: $$\min_{\{u_i\}_{i=0}^{M},\{v_i\}_{i=1}^M} \frac{1}{M} \sum_{i=1}^{M} f_i(u_i,v_i) + \frac{\lambda}{2} \frac{1}{M} \sum_{i=1}^{M} H_i(u_i,u_0)$$ (4) where $u_i$ and $v_i$ are the local feature extractor and local prediction head at $i$-th client, $i = 1,...,M$, respectively, $u_0$ is a global feature extractor maintained at the server. $H_i(u_i,u_0)$ is a regularization term to force the representation of the $i$th client $u_i$, which is defined on local dataset, and $u_0$ to be close. In this formulation the local feature extractors are no longer exactly same for each client, which provides more flexibility to fit highly heterogeneous data. The local models are almost trained locally except that their intermediate representations are forced to be close to each other. The local parameters are not covered by the global parameters in the training process, retaining more local knowledge. Fig. 1 displays an overview of our proposed framework. One point that we would like to stress: our regularization of the representation part is data-driven (compare with the regularization term in eq. 2). For this formulation of the problem, we propose a new federated learning algorithm (detailed in Algorithm 4.1) and outlined in Fig. 1 based on distributed stochastic gradient descent. Note that, the server does not have the training data; the regularization term is defined based on the local data at $i$-th client. The server maintains $u_0$, which can be seen as a ‘probe model’ from server to detect the representations of local data. As it is expensive to get access to the full gradient of $H_i(u_i,u_0)$, we leverage the batch of samples used to calculate the stochastic gradient of $f_i(u_i,v_i)$ to compute a stochastic gradient of $H_i(u_i,u_0)$. It will bring an additional stochastic noise of regularization term. To handle this issue, we further propose a partial variance (partial, because it is only applied to the stochastic gradient term related to regularization) reduction method to reduce the effect of the stochastic regularization. Note that, it has been previously observed that the application of variance reduction methods in neural network training is not successful [Defazio & Bottou (2019); Reddi et al. (2021); Li et al. (2023)] - therefore we wish to retain the randomness of stochastic gradient of \( f_i(u_i, v_i) \). As a result, we only apply the variance reduction technique on the stochastic regularization term. To avoid digression, this partial variance reduction part is delegated to Appendix C. ![Figure 1: An overview of FedReCo.](image) Our contributions are summarized as follows. - We propose a formulation of consensus optimization problem at the representation level to improve the flexibility of personalized federated learning (Sec. 3). The local models are trained locally except that their intermediate representations are forced to be close to each other by interacting with the server, retaining local knowledge to the maximum extent. - We propose a stochastic gradient descent (SGD) based algorithm to solve this *representation consensus problem*, abbreviated as FedReCo (*Federated Representation Consensus*). Then we provide a theoretical convergence analysis of FedReCo for general non-convex functions (Sec. 4 and Sec. 5). - Our algorithm is naturally private because clients share only their representation part with the server. 
Moreover, it is very noise resilient, and as a result can be easily adapted to a differential private variant to further protect data privacy, without loss of accuracy (see, Sec. 4.2 and Sec. 6.2). - We conduct experiments on several benchmark datasets to illustrate the effectiveness of our proposed algorithms. Our algorithm can outperform the existing methods in highly heterogeneous settings (Sec. 6.1). ## 2 RELATED WORKS **Personalized FL.** There are many strategies to achieve personalization in federated learning, including local fine-tuning [Wang et al. (2019); Collins et al. (2022)], meta-learning [Chen et al. (2018); Jiang et al. (2019); Fallah et al. (2020)], multi-task learning [Smith et al. (2017)], mixture of local and global model [Hanzely & Richtárik (2020); Deng et al. (2020); Mansour et al. (2020)], consensus based regularization [T Dinh et al. (2020); Li et al. (2021)]. In all of these methods, the model is considered as a whole and fully personalized at each client. **Consensus Based Regularization in FL.** Consensus based regularization has recently been applied in federated learning to force the local models to be close to each other. Notable works include FedProx [Li et al. (2020a)], that adds a proximal term to make the local model close to global model during the local training process, and Ditto [Li et al. (2021)], that has extended this idea to personalized federated learning. In addition, pFedMe [T Dinh et al. (2020)] proposes a bi-level problem based on similar proximal term and Moreau envelop as clients’ loss function. In [Li et al. (2019); Zhu & Ling (2022)], \( l_1 \)-norm based regularization has been proposed and proven to be robust to malicious attacks. Consensus optimization has also been studied in federated learning from a primal-dual view [Zhang et al. (2021)], including the alternating direction method of multipliers (ADMM) [Zhou & Li (2021); Huang et al. (2019)]. The regularization term in these works is generally based on the difference between local model and global model in the whole model level. **Shared Representation in FL.** The idea of partitioning a neural network into feature extractor and personalized prediction head has been applied to federated learning in [Arivazhagan et al. (2019)], which personalizes the last layer for different clients. The shared representation in linear regression problem and a convergence analysis has been given in [Collins et al. (2021); Oh et al. (2022)] only updates the feature... extractor with a randomly initialized prediction head, which is never updated in the training. Different from the shared feature extractor, the work of Liang et al. (2020) personalizes the first few layers and aggregates the last layer globally. Pillutla et al. (2022) considers a general framework of partial personalization in neural network training and establishes general convergence analysis for non-convex functions. Zhong et al. (2023) has extended the shared representation idea from different clients to different domains. Further, Shen et al. (2022) analyzes the differential privacy property for shared representation in federated learning. On top of shared representation, Xu et al. (2023) and Zhang et al. (2023) add a regularization term in the local training of shared feature extractor. These two works are the ones most related to this work. Specifically, Xu et al. (2023) exploits the centroid of the representations within one class to regularize the local training, while Zhang et al. 
(2023) uses the difference between global and local mutual information as the regularization term. The primary differences of our work compared to these are: 1) The feature extractor in Xu et al. (2023) and Zhang et al. (2023) is still the same for every client, although a regularization term is provided to constrain the update of the feature extractor; 2) It is hard to provide a privacy analysis for regularization terms based on the centroid of representations or on mutual information. Being an SGD-type method, our algorithm, on the other hand, is easily adapted to differentially private versions; and 3) These prior works have not provided a convergence analysis of their algorithms, while in this work we theoretically prove the convergence of our algorithms. 3 MOTIVATION AND PROBLEM STATEMENT 3.1 REPRESENTATION SIMILARITY ACROSS CLIENTS IN DIFFERENT LAYERS In federated learning, the heterogeneous data distribution can pull the local models in different directions through multiple local SGD steps. FedAvg-like algorithms suffer from this "client drift", which makes the local model drift far from the global model within one communication round. In what follows, we show the influence of client drift on the representations of different layers of one neural network model. We conduct an experiment on the CIFAR10 dataset with a small 5-layer CNN model and ResNet18 [He et al. (2016)]. There are 10 clients, each with 2 classes of data from the CIFAR10 dataset. We train the models via the FedAvg algorithm [Li et al. (2020b)]. For each communication round, we perform two epochs of local SGD updates of the local model. After the local iterations within one communication round $t$, we measure the similarity between the representations of the local model $\omega_i^t$ and the global model $\omega^t$ before aggregation. We use the centered kernel alignment (CKA) measurement [Kornblith et al. (2019); Nguyen et al. (2021); Li et al. (2023)] to quantify the similarity of representations. ![Figure 2: CKA Similarity for different layers.](image) Fig. 2 displays the CKA similarity of representations after 1 round and 10 rounds of training, respectively. After only 1 round of local training, the similarity decreases with deeper layers for both models. When training continues, the similarity between the local model and the global model increases for all the layers. After 10 rounds of training, the first four layers of the 5-layer CNN model become close to the global model, while the similarity of the last (classifier) layer is still low. This shows that FedAvg can learn a shared representation before the final classifier layer. However, even the earlier layers are slightly dissimilar (similarity strictly less than 1). This is more obvious for the larger ResNet18 model: even after many rounds of training, the similarity decreases with deeper layers. The same phenomenon has also been observed in [Li et al. (2023)] for a VGG model. This observation motivates our work to consider a framework that allows different feature extractors in different clients while still learning the shared representations. The classifier layer or prediction head has the largest difference between the local model and the global model, thus we wish to train it completely locally. For the feature extractor, we train it locally, but with a regularization term that forces it to learn representations from a global model. 3.2 Representation Consensus Optimization Problem Let us now formally formulate our problem.
Consider a federated learning system with $M$ clients, where each client $i$ has $N$ samples $\{x_{ij} \in \mathbb{R}^{d_x}, y_{ij} \in \mathbb{R}^{d_y}\}_{j=1}^{N}$, $i = 1,...,M$. For a given neural network, the model is partitioned into a feature extractor $u$ and a prediction head $v$. The $i$-th client maintains its own local model $u_i$ and $v_i$, and the server maintains a global feature extractor $u_0$. If we were to mandate that the representations of the local data be the same under the local feature extractor and the global feature extractor, then the representation consensus optimization problem would be $$\min_{u_0, \{u_i, v_i\}_{i=1}^{M}} \sum_{i=1}^{M} f_i(u_i, v_i) \quad \text{s.t.} \quad h_{ij}(u_i) = h_{ij}(u_0), \quad i = 1,2,...,M, \quad j = 1,2,...,N$$ where $f_i(u_i, v_i) = \frac{1}{N} \sum_{j=1}^{N} f_i(u_i, v_i | x_{ij}, y_{ij})$ is the empirical loss function over the $N$ samples, and $h_{ij}(u) \triangleq h_{ij}(x_{ij}|u)$ is the mapping $\mathbb{R}^{d_x} \rightarrow \mathbb{R}^p$ that maps the input $x_{ij}$ to an intermediate representation of dimension $p$. However, we do not need the representations to be exactly the same for the local feature extractor and the global feature extractor. Thus we only add an $\ell_2$-norm regularization term to constrain the local training: $$\min_{u_0, \{u_i, v_i\}_{i=1}^{M}} F(u_0, \{u_i, v_i\}_{i=1}^{M}) \triangleq \frac{1}{M} \sum_{i=1}^{M} f_i(u_i, v_i) + \lambda \frac{1}{2M} \sum_{i=1}^{M} H_i(u_i, u_0)$$ (5) where $H_i(u_i, u_0) = \frac{1}{N} \sum_{j=1}^{N} \|h_{ij}(u_i) - h_{ij}(u_0)\|^2$ is the regularization term that forces the representations of all the local data samples under the local and global feature extractors to be close. Since $f_i(u_i, v_i)$ and $H_i(u_i, u_0)$ are separable across clients, we can also write the objective function as $F(u_0, \{u_i, v_i\}_{i=1}^{M}) = \frac{1}{M} \sum_{i=1}^{M} F_i(u_0, u_i, v_i)$ where $F_i(u_0, u_i, v_i) = f_i(u_i, v_i) + \frac{\lambda}{2} H_i(u_i, u_0)$. The regularization term $H_i(u_i, u_0)$ is fully defined on the local dataset at the $i$-th client. The server cannot access the local data samples and can only send the global feature extractor to the clients to "detect" local information. 4 FedReCo Algorithm 4.1 Algorithm Description Since both $f_i(u_i, v_i)$ and $H_i(u_i, u_0)$ are based on the local data samples, we can apply stochastic gradient descent (SGD) to solve the problem of eq. (5). We can exploit the same batch of data samples to calculate the stochastic gradients of $f_i(u_i, v_i)$ and $H_i(u_i, u_0)$ simultaneously, by passing the same batch of samples to the model $\{u_i, v_i\}$ and to the feature extractor $u_0$, respectively. The global feature extractor only appears in the regularization term $H_i(u_i, u_0)$; thus we can just send the stochastic gradient of $H_i(u_i, u_0)$ with respect to $u_0$ to the server to update $u_0$ (this leads to a faster algorithm). To reduce the communication burden, we can perform multiple local SGD steps before transmitting the stochastic gradient. Due to the decoupling of the feature extractor and the prediction head, we can apply different learning rates and numbers of local steps to the two parts, respectively. In the following we use the symbol $\tilde{\nabla}$ to denote a stochastic gradient. Specifically, our proposed FedReCo (Representation Consensus) algorithm works as follows: At each communication round $t$, the server broadcasts $u_0$ to all the clients.
Each client first updates the local prediction head $v_i$ via $K_v$ SGD local steps with learning rate $\eta_v$ as: $$v_i^{t+1} = v_i^t - \eta_v \sum_{k=0}^{K_v-1} \tilde{\nabla}_{v_i} f_i(v_i^{t,k}, u_i^t),$$ (6) Algorithm 1 FedReCo Algorithm Input: Step size $\eta_u, \eta_v, \eta_0$, penalty parameter $\lambda$ Initialize: Initialize $u^0_0$ for server, initialize $u_i$ and $v_i$ for $i$-th client 1: for $t = 0, 1, ..., T - 1$ do 2: Server: 3: Broadcast $u^t_0$ to all the clients 4: Receive stochastic gradient $\tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0)$ from all the clients 5: Update $u_0$: $u^{t+1}_0 = u^t_0 - \eta_0 \frac{1}{M} \sum_{i=1}^{M} \tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0)$ 6: client $i$: 7: Receive $u^t_0$ from server, let $u^{t,0}_i = u^t_i$ 8: for $k = 0, 1, ..., K_v - 1$ do 9: Randomly select one batch of samples, pass the samples to the model $\{u^t_i, v^{t,k}_i\}$ and calculate stochastic gradient $\tilde{\nabla}_{v_i} f_i(v^{t,k}_i, u^t_i)$ 10: Update $v^{t,k+1}_i = v^{t,k}_i - \eta_v \tilde{\nabla}_{v_i} f_i(v^{t,k}_i, u^t_i)$ 11: end for 12: Let $v^{t+1}_i = v^{t,K_v}_i$ 13: for $k = 0, 1, ..., K_u - 1$ do 14: Randomly select one batch of samples, pass the samples to model $\{u^{t,k}_i, v^{t+1}_i\}$ and calculate stochastic gradients $\tilde{\nabla}_{u_i} f_i(v^{t+1}_i, u^{t,k}_i)$, pass the same batch of $t$ samples to feature extractor $u^t_0$ and calculate stochastic gradient $\tilde{\nabla}_{u_i} H_i(u^{t,k}_i, u^t_0)$ 15: Update $u^{t,k+1}_i = u^{t,k}_i - \eta_u \left( \tilde{\nabla}_{u_i} f_i(v^{t+1}_i, u^{t,k}_i) + \frac{\lambda}{2} \tilde{\nabla}_{u_i} H_i(u^{t,k}_i, u^t_0) \right)$ 16: end for 17: Let $u^{t+1}_i = u^{t,K_u}_i$ 18: Randomly select one batch of samples and pass them to $u^{t+1}_i$ and $u^t_0$, calculate the stochastic gradient $\tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0)$ and send it to the server 19: end for where $v^{t,0}_i \equiv v^t_i$. Then the client fixes $v_i$ and updates local feature extractor $u_i$ via $K_u$ local steps with learning rate $\eta_u$ as $$u^{t+1}_i = u^t_i - \eta_u \sum_{k=0}^{K_u-1} \left( \tilde{\nabla}_{u_i} f_i(v^{t+1}_i, u^{t,k}_i) + \frac{\lambda}{2} \tilde{\nabla}_{u_i} H_i(u^{t,k}_i, u^t_0) \right),$$ where again $u^{t,0}_i \equiv u^t_i$. After local training, the client calculates the stochastic gradient $\tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0)$ and sends it to server. The server aggregates the stochastic gradients and updates $u_0$ with a server learning rate $\eta_0$ as $$u^{t+1}_0 = u^t_0 - \eta_0 \frac{1}{M} \sum_{i=1}^{M} \tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0).$$ The details of FedReCo algorithm are described in Algorithm 4.1 4.2 Computation, Communication, and Privacy For local training in FedReCo, each client needs to pass the same batch of samples to two models and calculate the stochastic gradients. The local training burden does not increase too much compared to FedAvg. Although the global feature extractor enlarges the demand of local computation and memory, the local computation power is not usually the bottleneck in the whole system. For the communication stage, each client needs to send a stochastic gradient $\tilde{\nabla}_{u_0} H_i(u^{t+1}_i, u^t_0)$ to the server, which only includes the gradients of model parameters in feature extractor, less than the whole model. It is like a sparsification method on the gradient, but with fixed selected dimensions. 
Therefore the size of the transmitted information is actually much smaller than in FedAvg and prior consensus optimization works, improving the communication efficiency. The server only receives the gradient of the norm of the difference between local and global representations, which makes it harder to directly infer the local data, ensuring a more private setting. Nevertheless, we can still easily adapt the FedReCo algorithm to a differentially private version by injecting Gaussian noise into the transmitted stochastic gradient $\tilde{\nabla}_{u_0} H_i(u_i^{t+1}, u_0^t)$. Since the noise is added to the gradient of a penalty term on the difference between global and local representations, not to the gradient of the local training function itself, the influence of the noise is quite small. We can see from the experiments (see Sec. 6.2) that our algorithm can accommodate a very high privacy requirement of differential privacy. The privacy analysis is a straightforward extension of the standard Gaussian mechanism and is thus omitted here. As noted in the related works, FedPAC [Xu et al., 2023] and FedCR [Zhang et al., 2023] also add regularization terms to the shared feature extractor. However, both methods need to send additional information to the server, enlarging the communication burden. Moreover, FedCR needs to estimate the distribution of local representations, which requires much more local computation (see Sec. 6.1). Finally, it is not straightforward to add noise to the centroid or mutual information of the distributions, hence both works are not amenable to a privacy protection mechanism. 5 CONVERGENCE ANALYSIS In this section we provide a theoretical convergence analysis of our proposed FedReCo algorithm. We first give the assumptions needed for the theoretical analysis of our algorithm, and then provide a convergence result. **Assumption 1 (Smoothness).** We assume the smoothness of the loss function with respect to the different parameters in the local model and the global feature extractor. - For each $i = 1, 2, \ldots, M$, the gradient $\nabla_{u_i} f_i(u_i, v_i)$ is $L_{fu}$-Lipschitz with respect to $u_i$ and $L_{fuv}$-Lipschitz with respect to $v_i$. Similarly, for each $i = 1, 2, \ldots, M$, the gradient $\nabla_{v_i} f_i(u_i, v_i)$ is $L_{fv}$-Lipschitz with respect to $v_i$ and $L_{fuv}$-Lipschitz with respect to $u_i$. - For each $i = 1, 2, \ldots, M$, the gradient $\nabla_{u_i} H_i(u_i, u_0)$ is $L_{Hu}$-Lipschitz with respect to $u_i$ and $L_{Huu}$-Lipschitz with respect to $u_0$. Similarly, the gradient $\nabla_{u_0} H_i(u_i, u_0)$ is $L_{Hu0}$-Lipschitz with respect to $u_0$ and $L_{Huu}$-Lipschitz with respect to $u_i$. **Assumption 2 (Bounded variance of stochastic gradients).** For each client $i = 1, 2, \ldots, M$, its stochastic gradients are unbiased and their variances are upper-bounded by: $$\mathbb{E}\left[\|\tilde{\nabla}_{u_i} f_i(u_i, v_i) - \nabla_{u_i} f_i(u_i, v_i)\|^2\right] \leq \sigma_u^2, \quad \mathbb{E}\left[\|\tilde{\nabla}_{v_i} f_i(u_i, v_i) - \nabla_{v_i} f_i(u_i, v_i)\|^2\right] \leq \sigma_v^2, \quad i = 1, \ldots, M$$ $$\mathbb{E}\left[\|\tilde{\nabla}_{u_i} H_i(u_i, u_0) - \nabla_{u_i} H_i(u_i, u_0)\|^2\right] \leq \sigma_H^2, \quad \mathbb{E}\left[\|\tilde{\nabla}_{u_0} H_i(u_i, u_0) - \nabla_{u_0} H_i(u_i, u_0)\|^2\right] \leq \sigma_H^2, \quad i = 1, \ldots, M.$$ Assumptions 1 and 2 are standard in non-convex convergence analysis.
Note that we do not need a heterogeneity bound, since our method is fully personalized and fits arbitrarily heterogeneous settings. We also do not need any bounded-gradient assumption, which is common in some of the literature. In the following we let $U^t \triangleq [u_1^t, \ldots, u_M^t]$ and $V^t \triangleq [v_1^t, \ldots, v_M^t]$. For the measurements of convergence, we define $$\Gamma_1^t = \|\nabla_{u_0} F(U^t, u_0^t)\|^2, \quad \Gamma_2^t = \frac{1}{M} \sum_{i=1}^M \|\nabla_{u_i} F_i(u_i^t, v_i^t, u_0^t)\|^2, \quad \Gamma_3^t = \frac{1}{M} \sum_{i=1}^M \|\nabla_{v_i} f_i(u_i^t, v_i^t)\|^2$$ If the three sequences converge to zero in expectation as $t$ grows, then we obtain the convergence of $F$ in expectation to a stationary point. Throughout this paper we denote by $F_{\min}$ the minimum of $F$ over its domain. For the FedReCo algorithm described in Algorithm 4.1, we have the following convergence result. **Theorem 1 (Convergence of FedReCo).** Suppose that Assumptions 1 and 2 hold. Let $L_1^2 = 2L_{fu}^2 + \frac{\lambda^2}{2} L_{Hu}^2$, $\sigma_1^2 = 2\sigma_u^2 + \frac{\lambda^2 \sigma_H^2}{2}$. If the learning rates satisfy $\eta_0 = \frac{\eta}{L_{Hu}}$, $\eta_u = \frac{\eta}{L_{Hu} K_u}$, $\eta_v = \frac{\eta}{L_{fv} K_v}$ and $\eta$ is chosen depending on the parameters $\lambda, L_{Hu}, L_1, L_{fu}, L_{Huu}, L_{fuv}, \sigma_H, \sigma_u, \sigma_v$, then, ignoring absolute constants, we have: $$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left(\frac{1}{4L_{Hu}} \Gamma_1^t + \frac{1}{16L_1} \Gamma_2^t + \frac{1}{8L_{fv}} \Gamma_3^t\right) \lesssim \frac{\Sigma_1^2}{\sqrt{T}} + \frac{\Sigma_2^2}{T^{\frac{3}{2}}} + O\left(\frac{1}{T}\right)$$ (9) where \[ \Sigma_1 = \frac{\lambda^3}{16M} \frac{\sigma_H^2}{L_{Hu}} + \frac{3}{2} \frac{\sigma_1^2}{L_1} + \frac{3}{2} \frac{\sigma_v^2}{L_{fv}}, \quad \Sigma_2 = \frac{3}{20} \left( \lambda^2 \frac{L_{Hu}^2}{L_{Hu} L_1} \sigma_1^2 + \frac{L_{fv}^2}{L_1 L_{fv}} \sigma_v^2 \right) \] are positive constants depending on the Lipschitz constants and the stochastic variances. The proofs are deferred to Appendix B. The left-hand side of (9) is a weighted sum of the measurements $\Gamma_1^t, \Gamma_2^t, \Gamma_3^t$, which converges to zero with the number of iterations. The decay rate on the right-hand side is standard for non-convex SGD, and depends on the stochastic variances $\sigma_H, \sigma_u, \sigma_v$. In $\Sigma_1$, which controls the $O(1/\sqrt{T})$ term, $\sigma_H^2$ is divided by $M$, the number of clients. This is because $u_0$ is trained by aggregating the stochastic gradients from all the clients, while $u_i$ and $v_i$ are trained only locally. The data-centric regularization term brings an additional error, which can be reduced by a partial variance reduction technique described in Appendix C. The $\sigma_H^2$ term can be removed in the theoretical result for partial variance reduction, and we can further achieve an $O(1/T)$ convergence rate by applying variance reduction to both the $f_i(u_i, v_i)$ function and the regularization term $H_i(u_i, u_0)$. However, we found that in practical neural network training, the partial variance reduction brings almost no improvement. The performance of FedReCo is good even when the batch size is small, and sometimes slightly better than the variance-reduced version. This shows that FedReCo is robust to the additional error produced by the regularization term.
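Before moving to the experiments, the following sketch connects the update rules analyzed above back to the FedReCo algorithm by spelling out one round on a single client. It is our illustration only: the helper names, data loader, and the minibatch version of $H_i$ are assumptions, not the authors' code.

```python
import torch

def rep(extractor, x):
    """Representation h(x | u) produced by a feature extractor."""
    return extractor(x)

def fedreco_client_round(local_extractor, local_head, global_extractor, loader,
                         loss_fn, lam, eta_u, eta_v, K_u, K_v):
    """One FedReCo round on one client (sketch of the client side of the algorithm).

    Returns the stochastic gradient of H_i w.r.t. the global extractor u_0,
    which is the only quantity transmitted to the server.
    """
    opt_v = torch.optim.SGD(local_head.parameters(), lr=eta_v)
    opt_u = torch.optim.SGD(local_extractor.parameters(), lr=eta_u)
    batches = iter(loader)

    # (i) K_v local SGD steps on the prediction head v_i with u_i fixed.
    for _ in range(K_v):
        x, y = next(batches)
        opt_v.zero_grad()
        loss_fn(local_head(rep(local_extractor, x).detach()), y).backward()
        opt_v.step()

    # (ii) K_u local SGD steps on the feature extractor u_i with v_i fixed,
    #      regularized by (lambda / 2) * H_i(u_i, u_0) on the same minibatch.
    for _ in range(K_u):
        x, y = next(batches)
        opt_u.zero_grad()
        h_local = rep(local_extractor, x)
        pred_loss = loss_fn(local_head(h_local), y)
        penalty = ((h_local - rep(global_extractor, x).detach()) ** 2).sum(1).mean()
        (pred_loss + 0.5 * lam * penalty).backward()
        opt_u.step()  # only u_i is stepped here; v_i and u_0 are left untouched

    # (iii) Stochastic gradient of H_i w.r.t. the global extractor u_0 on a fresh
    #       batch; the server averages these over clients and steps u_0 with eta_0.
    x, _ = next(batches)
    penalty = ((rep(local_extractor, x).detach() - rep(global_extractor, x)) ** 2).sum(1).mean()
    return torch.autograd.grad(penalty, list(global_extractor.parameters()))
```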
6 EXPERIMENTS In this section we experimentally compare FedReCo with other recent personalized federated learning algorithms to show the effectiveness of our algorithm, and also show the privacy advantage of FedReCo. 6.1 PERFORMANCE ON BENCHMARK DATASETS We perform the experiments on FashionMNIST/FMNIST and CIFAR10 datasets with a 5-layer CNN model, with two convolution layers and three fully connected layers. The first four layers are considered as the feature extractor and one last classifier layer as the prediction head trained totally locally. The compared methods include: FedAvg [McMahan et al., 2017], FedAvg-FineTuning (FT) [Collins et al., 2022], Ditto [Li et al., 2021], FedRep [Collins et al., 2021], FedBabu [Oh et al., 2022], FedPAC [Xu et al., 2023], FedCR [Zhang et al., 2023]. There are 50 clients in the network, each with 4 classes of data for FMNIST dataset and 2 classes of data for CIFAR10 dataset, to form a heterogeneous data distribution. The results are obtained after 500 rounds of communication, each with local SGD updates for 2 epochs of local samples, 1 epoch on training local prediction head, 1 epoch on training local feature extractor. More details of settings and hyper-parameters are provided in Appendix A. For the relatively simple dataset FMNIST, FedAvg can already get an acceptable accuracy, and other algorithms obtain similar final accuracy. Note that in this case the FedAvg+fine-tuning is competitive to other methods, getting the highest accuracy. For the more complex CIFAR10 dataset and more heterogeneous setting, fine-tuning is still competitive to some personalization methods, with FedReCo outperforming all compared methods, showing the higher flexibility to more heterogeneous setting. | | FEDAVG | FEDAVG-FT | DITTO | FEDREP | FEDBABU | FEDPAC | FEDCR | FEDRECO | |----------|--------|-----------|-------|--------|---------|--------|-------|---------| | F | 85.38 | **93.85** | 92.92 | 93.04 | 92.85 | 92.62 | 92.71 | **93.09** | | C | 56.17 | 89.05 | 90.55 | 89.12 | 85.69 | 88.71 | 89.21 | **91.07** | To compare efficiency, Fig. 3(a) displays the test accuracy of different algorithms with varying communication rounds, and Table 2 shows the running time of different algorithms when they achieve 85% accuracy on CIFAR10 dataset. The experiments are done in a single NVIDIA-A100-PCIE-40GB GPU, to simulate multiple clients. Note that, Table 2 only reflects the local computation cost, and does not include communication cost. It can be seen that Ditto can achieve the same accuracy with least time. FedReCo requires slightly more time in local computation as it needs to compute the representations twice for local model and global feature extractor, but gets higher final accuracy. Plus FedReCo communicates less than Ditto which transmits the whole model. Compared to other methods using the partition of neural network, FedReCo is faster: FedPAC needs more rounds of iteration to achieve the same accuracy, and FedCR spends orders of magnitude more time on local computation. Table 2: Running time to achieve 85% Accuracy on CIFAR10 dataset | | DITTO | FEDREP | FEDBABU | FEDPAC | FEDCR | FEDRECO | |--------|---------|--------|---------|--------|--------|---------| | Time | 52 min 41 s | 130 min 2 s | 317 min 51 s | 334 min 46 s | 1736 min 7 s | 61 min 56 s | Figure 3: (a) Test accuracy on CIFAR10 dataset. (b) Test accuracy with differential privacy on CIFAR10 dataset. 
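As described in Sec. 4.2, the differentially private variant evaluated in the next subsection perturbs only the gradient that the client transmits for the consensus term. A minimal sketch of that step (our illustration; the clipping threshold and noise multiplier are assumed hyper-parameters, not values from the paper):

```python
import torch

def privatize_transmitted_gradient(grads, clip_norm, noise_multiplier):
    """Gaussian mechanism applied to the per-client gradient of H_i w.r.t. u_0.

    The list of gradient tensors is clipped to `clip_norm` in l2 norm and
    perturbed with Gaussian noise of standard deviation
    `noise_multiplier * clip_norm` before being sent to the server.
    """
    flat = torch.cat([g.reshape(-1) for g in grads])
    scale = torch.clamp(clip_norm / (flat.norm() + 1e-12), max=1.0)
    noisy = flat * scale + noise_multiplier * clip_norm * torch.randn_like(flat)
    # Restore the original tensor shapes before transmission.
    out, offset = [], 0
    for g in grads:
        n = g.numel()
        out.append(noisy[offset:offset + n].reshape(g.shape))
        offset += n
    return out
```

Since the noise enters only through the penalty gradient and not through the local training loss, the accuracy degradation is expected to be small, which is consistent with the results reported below.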
6.2 ROBUSTNESS AND DIFFERENTIAL PRIVACY We further explore the impact of differential privacy on our proposed algorithm. We use $(\epsilon, \delta)$-local differential privacy to reduce the risk of compromising local data and apply the standard Gaussian mechanism to add noise to the transmitted information [Dwork & Roth, 2014; Abadi et al., 2016]. For FedReCo, we add Gaussian noise to the stochastic gradients of regularization term. To compare, we use FedAvg and add the Gaussian noise to the model difference within one round of local training, which is the information to be transmitted from a client to server for aggregation. More experiment details can be found in Appendix A.3. Fig. 3(b) shows the test accuracy with the number of communication rounds with Gaussian noise, and Table 3 displays the final accuracy of FedAvg, FedAvg-FT and FedReCo after 500 of communication rounds. We can see the FedReCo is almost not influenced by the added Gaussian noise, even when the $\epsilon$ and $\delta$ is pretty small, while FedAvg suffers a lot from the added noise. FedAvg with fine-tuning also suffers from the noise since the model trained by FedAvg cannot learn the local knowledge well with the added noise. This suggests a huge advantage to do optimization on the representation level, not at the model level, to be more robust to perturbations. Table 3: Test Accuracy (%) with $(\epsilon, \delta)$-differential privacy | Dataset | FedAvg $(\epsilon=0.2, \delta=0.1)$ | FedAvg-FT $(\epsilon=0.2, \delta=0.1)$ | FedReCo $(\epsilon=0.2, \delta=0.1)$ | FedReCo $(\epsilon=0.05, \delta=0.05)$ | |---------|-----------------------------------|-------------------------------------|----------------------------------|------------------------------------| | FMNIST | 8.43 | 69.00 | 93.14 | 93.15 | | CIFAR10 | 10.42 | 64.25 | 90.95 | 90.93 | 7 CONCLUSIONS We have proposed a federated learning algorithm, FedReCo, that enforces the representation part of local models to be similar in a data-driven manner. While being superior in accuracy and efficiency to many other methods, FedReCo is also noise-robust and can be made differentially private without degradation. FedReCo takes a step to study how layer sensitivity in neural networks can be fully exploited in federated learning, which hopefully will result in further interesting works. In fact, the framework of FedReCo can be easily extended to the partition of neural network at any layer, not limited to last classifier layer, and even to partitioning at different layers for different clients. REFERENCES Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016. Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818, 2019. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013. Fei Chen, Mi Luo, Zhenhua Dong, Zhenguo Li, and Xiuqiang He. Federated meta-learning with fast convergence and efficient communication. arXiv preprint arXiv:1802.07876, 2018. Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In International conference on machine learning, pp. 2089–2099. PMLR, 2021. 
Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Fedavg with fine tuning: Local updates lead to representation learning. Advances in Neural Information Processing Systems, 35:10572–10586, 2022. Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. Advances in Neural Information Processing Systems, 32, 2019. Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3):211–407, 2014. Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. Advances in Neural Information Processing Systems, 33:3557–3568, 2020. Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Zonghao Huang, Rui Hu, Yuanxiong Guo, Eric Chan-Tin, and Yanmin Gong. Dp-admm: Admm-based distributed learning with differential privacy. IEEE Transactions on Information Forensics and Security, 15:1002–1012, 2019. Yihan Jiang, Jakub Konečný, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488, 2019. Peter Kairouz and H Brendan McMahan. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1):1–210, 2021. ISSN 1935-8237. doi: 10.1561/2200000083. URL http://dx.doi.org/10.1561/2200000083 Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132–5143. PMLR, 2020. Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In International conference on machine learning, pp. 3519–3529. PMLR, 2019.
qOFLn0pMoe
With regards to Nguyen et al. 2023 how to compare different assumptions? This paper assumes unique solution and Nguyen et al 2023 assumes $\nabla f(x^*)=0$ in constrained case. Which one is more restrictive?
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise Anonymous authors Paper under double-blind review Abstract High-probability analysis of stochastic first-order optimization methods under mild assumptions on the noise has been gaining a lot of attention in recent years. Typically, gradient clipping is one of the key algorithmic ingredients to derive good high-probability guarantees when the noise is heavy-tailed. However, if implemented naively, clipping can spoil the convergence of the popular methods for composite and distributed optimization (Prox-SGD/Parallel SGD) even in the absence of any noise. Due to this reason, many works on high-probability analysis consider only unconstrained non-distributed problems, and the existing results for composite/distributed problems do not include some important special cases (like strongly convex problems) and are not optimal. To address this issue, we propose new stochastic methods for composite and distributed optimization based on the clipping of stochastic gradient differences and prove tight high-probability convergence results (including nearly optimal ones) for the new methods. Using similar ideas, we also develop new methods for composite and distributed variational inequalities and analyze the high-probability convergence of these methods. 1 Introduction Many recent works on stochastic optimization have the ultimate goal of bridging the theory and practice in machine learning. This is mostly reflected in the attempts at the theoretical analysis of optimization methods under weaker assumptions than the standard ones. Moreover, some phenomena cannot be explained using classical in-expectation convergence analysis (see the motivating example from (Gorbunov et al., 2020a)) that results in the growing interest in more accurate ways to the analysis of stochastic methods, for example, high-probability convergence analysis. However, despite the significant attention to this topic (Nazin et al., 2019; Davis et al., 2021; Gorbunov et al., 2020a; 2022a; Cutkosky & Mehta, 2021; Sadiev et al., 2023; Nguyen et al., 2023b; Liu & Zhou, 2023; Liu et al., 2023), several important directions remain unexplored. In particular, all mentioned works either consider unconstrained problems or consider general composite/constrained minimization/variational inequality problems but have some noticeable limitations, such as bounded domain assumption, extra logarithmic factors in the complexity bounds, not optimal (not accelerated) convergence rates, or no analysis of (quasi-) strongly convex (monotone) case. The importance of composite/constrained formulations for the machine learning community can be justified in many ways. For example, composite optimization and distributed optimization have a lot of similarities, i.e., one can view a distributed optimization problem as a special composite optimization problem (Parikh & Boyd, 2014). Due to the large sizes of modern machine learning models and datasets, many important problems can be solved in a reasonable time only via distributed methods. Next, composite formulations are very useful for handling different regularizations popular in machine learning and statistics (Zou & Hastie, 2005; Shalev-Shwartz & Ben-David, 2014; Beck, 2017). Finally, variational inequalities are usually considered with constraints as well. 
The discrepancy between the importance of composite/constrained formulations and the lack of high-probability convergence results in this setup can be partially explained as follows. SOTA high-probability convergence results are derived for the algorithms that use gradient clipping (Pascanu... i.e., the clipping operator defined as \( \text{clip}(x, \lambda) = \min\{1, \frac{\lambda}{\|x\|}\}x \) for \( x \neq 0 \) and \( \text{clip}(0, \lambda) = 0 \) with some clipping level \( \lambda > 0 \) is applied to the stochastic gradients. If \( \lambda \) is too small, then naïve Proximal Gradient Descent with gradient clipping is not a fixed point method, i.e., the method escapes the solution even if it is initialized there (see a technical explanation in Section 2). This fact implies that one has either to increase the clipping level or to decrease the stepsize to converge to the exact solution asymptotically; the latter approach leads to a slower convergence rate. On the other hand, even in the unconstrained case, the existing results with acceleration/linear convergence are derived for the methods using decreasing clipping level (Gorbunov et al., 2020a; Sadiev et al., 2023). Therefore, new algorithms and analyses are required to handle this issue. In this work, we close this gap by proposing new stochastic methods for composite and distributed problems via the clipping of gradient differences that converge to zero with high probability. This allows us to achieve the desirable acceleration and linear convergence. Before we move on to the presentation of the main contributions, we need to introduce the problem settings formally. ### 1.1 Setup **Notation.** The standard Euclidean norm of vector \( x \in \mathbb{R}^d \) is denoted as \( \|x\| = \sqrt{\langle x, x \rangle} \). \( B_R(x) = \{y \in \mathbb{R}^d \mid \|y - x\| \leq R\} \) is the ball centered at \( x \) with radius \( R \). Bregman divergence w.r.t. function \( f \) is denoted as \( D_f(x, y) \overset{\text{def}}{=} f(x) - f(y) - \langle \nabla f(y), x - y \rangle \). In \( O(\cdot) \), we omit the numerical factors, and in \( \tilde{O}(\cdot) \), we omit numerical and logarithmic factors. For natural \( n \geq 1 \) the set \( \{1, 2, \ldots, n\} \) is denoted as \([n]\). Finally, we use \( \mathbb{E}_\xi[\cdot] \) to denote the expectation w.r.t. the randomness coming from \( \xi \). **Considered problems.** The first class of problems we consider in this work is stochastic composite minimization problems: \[ \min_{x \in \mathbb{R}^d} \{ \Phi(x) = f(x) + \Psi(x) \}, \] where \( f(x) = \mathbb{E}_{\xi \sim D}[f_\xi(x)] \) is a differentiable function satisfying some properties to be defined later and \( \Psi(x) \) is a proper, closed, convex function (composite/regularization term). The examples of problem (1) arise in various applications, e.g., machine learning (Shalev-Shwartz & Ben-David, 2014), signal processing (Combettes & Pesquet, 2011), image processing (Luke, 2020). We also consider variational inequality problems, see Appendix C. The distributed version of (1) has the following structure of \( f \): \[ f(x) = \frac{1}{n} \sum_{i=1}^{n} \{ f_i(x) = \mathbb{E}_{\xi_i \sim D_i}[f_{\xi_i}(x)] \}. \] In this case, there are \( n \) workers connected in a centralized way with some parameter server; worker \( i \) can query some noisy information (stochastic gradients/estimates) about \( f_i \). 
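Since gradient clipping is the key ingredient in all of the methods discussed below, a minimal sketch of the clipping operator $\text{clip}(x, \lambda)$ defined in the introduction may be useful (our illustration):

```python
import numpy as np

def clip(x, lam):
    """clip(x, lam) = min(1, lam / ||x||) * x, with clip(0, lam) = 0."""
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x
    return min(1.0, lam / norm) * x

# A heavy-tailed stochastic gradient with norm far above lam is rescaled to norm lam,
# while gradients already inside the ball of radius lam are left unchanged.
```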
**In-expectation and high-probability convergence.** In-expectation convergence guarantees provide upper bounds on the number of iterations/oracle calls \( K = \tilde{K}(\varepsilon) \) needed for a method to find a point \( x^K \) such that \( \mathbb{E}[C(x^K)] \leq \varepsilon \) for a given convergence criterion \( C(x) \) (e.g., \( C(x) \) can be \( f(x) - f(x^*) \), \( \|x - x^*\|^2 \), \( \|\nabla f(x)\|^2 \)) and a given accuracy \( \varepsilon > 0 \). High-probability convergence guarantees give upper bounds on the number of iterations/oracle calls \( K = K(\varepsilon, \beta) \) needed for a method to find a point \( x^K \) such that \( \mathbb{P}\{C(x^K) \leq \varepsilon\} \geq 1 - \beta \), where \( \beta \in (0, 1) \) is a confidence level. It is worth noting that Markov’s inequality implies \( \mathbb{P}\{C(x^K) > \varepsilon\} < \mathbb{E}[C(x^K)]/\varepsilon \), meaning that it is sufficient to take \( K = \tilde{K}(\varepsilon\beta) \) to guarantee \( \mathbb{P}\{C(x^K) > \varepsilon\} < \mathbb{E}[C(x^K)]/\varepsilon \leq \beta \). However, this typically leads to a polynomial dependence on \( 1/\beta \) that significantly spoils the complexity of the method when \( \beta \) is small. Therefore, we focus on high-probability convergence guarantees that depend on \( 1/\beta \) poly-logarithmically. Moreover, such high-probability results are more sensitive to the noise distribution (and, thus, more accurate) than in-expectation ones (Gorbunov et al., 2020a; Sadiev et al., 2023). **Proximal operator.** We assume that the function \( \Psi(x) \) has a relatively simple structure such that one can efficiently compute the proximal operator: \( \text{prox}_{\gamma\Psi}(x) = \arg\min_{y \in \mathbb{R}^d} \{\gamma\Psi(y) + \frac{1}{2}\|y - x\|^2\} \). For the properties of the proximal operator and examples of functions \( \Psi(x) \) such that \( \text{prox}_{\gamma\Psi}(x) \) can be easily computed, we refer the reader to (Beck, 2017). **Bounded central $\alpha$-th moment.** We consider the situation when $f_i$ and $F_i$ are accessible through stochastic oracle calls. The stochastic estimates satisfy the following assumption.\footnote{Following (Sadiev et al., 2023), we consider all assumptions only on some bounded set $Q \subseteq \mathbb{R}^d$: the diameter of $Q$ depends on the starting point. We emphasize that we do not assume boundedness of the domain of the original problem. Instead, we prove via induction that the iterates of the considered methods stay in some ball around the solution with high probability (see the details in Section 3). Thus, it is sufficient for us to assume everything just on this ball, though our analysis remains unchanged if we introduce all assumptions on the whole domain.} **Assumption 1.** There exist some set $Q \subseteq \mathbb{R}^d$ and values $\sigma \geq 0$, $\alpha \in (1, 2]$ such that for all $x \in Q$ we have $\mathbb{E}_{\xi_i \sim D_i}[\nabla f_{\xi_i}(x)] = \nabla f_i(x)$ and $$\mathbb{E}_{\xi_i \sim D_i}\left[\|\nabla f_{\xi_i}(x) - \nabla f_i(x)\|^\alpha\right] \leq \sigma^\alpha.$$ (3) For $\alpha = 2$, Assumption 1 reduces to the bounded variance assumption, and for $\alpha \in (1, 2)$ the variance of the stochastic estimator can be unbounded, e.g., the noise can have a Lévy $\alpha$-stable distribution (Zhang et al., 2020b), which is heavy-tailed. **Assumptions on $f_i$.** We assume that the functions $\{f_i\}_{i \in [n]}$ are $L$-smooth.
**Assumption 2.** We assume that there exist some set $Q \subseteq \mathbb{R}^d$ and constant $L > 0$ such that for all $x, y \in Q$, $i \in [n]$ and for all $x^* \in \arg\min_{x \in \mathbb{R}^d} \Phi(x)$ $$||\nabla f_i(x) - \nabla f_i(y)|| \leq L||x - y||,$$ $$||\nabla f_i(x) - \nabla f_i(x^*)||^2 \leq 2L(f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle).$$ (4) (5) As noted in Appendix B from (Sadiev et al., 2023), (5) is satisfied on the set $Q \neq \mathbb{R}^d$ if (4) holds on a slightly larger set in the case of $\Psi \equiv 0$, $n = 1$ (unconstrained single-node case). For simplicity, we assume that both (4) and (5) hold on $Q$. This is always the case for $L$-smooth functions on $Q = \mathbb{R}^d$ when $\Psi \equiv 0$, $n = 1$. In a more general situation, condition (5) can be viewed as an assumption on the structured non-convexity of $\{f_i\}_{i \in [n]}$. Finally, if $\{f_i\}_{i \in [n]}$ are convex and $L$-smooth on the whole domain of the problem (1), then Assumption 2 holds. Next, for each particular result about the convergence of methods for (1), we make one of the following assumptions. **Assumption 3.** There exist some set $Q \subseteq \mathbb{R}^d$ and constant $\mu \geq 0$ such that $f$ is $\mu$-strongly convex: $$f(y) \geq f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu}{2}||y - x||^2 \quad \forall x, y \in Q.$$ (6) When $\mu = 0$, function $f$ is called convex on $Q$. This is a standard assumption for optimization literature (Nesterov et al., 2018). We also consider a relaxation of strong convexity. **Assumption 4.** There exist some set $Q \subseteq \mathbb{R}^d$ and constant $\mu \geq 0$ such that $f_1, \ldots, f_n$ are $(\mu, x^*)$-quasi-strongly convex for all $x^* \in \arg\min_{x \in \mathbb{R}^d} \Phi(x)$: $$f_i(x^*) \geq f_i(x) + \langle \nabla f_i(x), x^* - x \rangle + \frac{\mu}{2}||x - x^*||^2 \quad \forall x \in Q, i \in [n].$$ (7) Condition (7) is weaker than (6) and holds even for some non-convex functions (Necoara et al., 2019). ### 1.2 Our Contributions - **Methods with clipping of gradient differences for distributed composite minimization.** We develop two stochastic methods for composite minimization problems – Proximal Clipped SGD with shifts (Prox-clipped-SGD-shift) and Proximal Clipped Similar Triangles Method with shifts (Prox-clipped-SSTM-shift). Instead of clipping stochastic gradients, these methods clip the difference between the stochastic gradients and the shifts that are updated on the fly. This trick allows us to use decreasing clipping levels, and, as a result, we derive the first accelerated high-probability convergence rates and tight high-probability convergence rates for the non-accelerated method in the Table 1: Summary of known and new high-probability complexity results for solving (non-) composite (non-) distributed smooth optimization problem (1). Column “Setup” indicates the assumptions made in addition to Assumptions 1 and 2. All assumptions are made only on some ball around the solution with radius \( R \), \( \| x^0 - x^* \| \). Complexity is the number of stochastic oracle calls (per worker) needed for a method to guarantee that \( P[\text{Metric} \leq \varepsilon] \geq 1 - \beta \) for some \( \varepsilon > 0 \), \( \beta \in (0, 1] \) and “Metric” is taken from the corresponding column. Numerical and logarithmic factors are omitted for simplicity. Column “C?” shows whether the problem (1) is composite; “D?” indicates whether the problem (1) is distributed. 
Notation: \( L \) = Lipschitz constant of the gradients; \( \sigma \) = parameter from Assumption 1; \( R \) = any upper bound on \( \| x^0 - x^* \| \); \( \zeta_* = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \| \nabla f_i(x^*) \|^2} \); \( \tilde{R}^2 = R\left(3R + L^{-1}(2\eta\sigma + \| \nabla f(x^0) \|)\right) \) for some \( \eta > 0 \) (for the result from Nguyen et al. (2023a); one can show that \( \tilde{R}^2 = \Theta(R^2 + R\zeta_*/L) \) when \( n = 1 \), see the discussion after Theorem 2.3); \( \mu \) = (quasi-)strong convexity parameter. The results of this paper are highlighted in blue.

| Setup | Method | Metric | Complexity | C? | D? |
|-------|--------|--------|------------|----|----|
| As. 3 (\( \mu = 0 \)) | clipped-SGD (Sadiev et al., 2023) | \( f(\bar{x}^K) - f(x^*) \) | \( \max \left\{ \frac{LR^2}{\varepsilon}, \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | X | X |
| | clipped-SSTM (Sadiev et al., 2023) | \( f(y^K) - f(x^*) \) | \( \max \left\{ \sqrt{\frac{LR^2}{\varepsilon}}, \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | X | X |
| | Clipped-SMD (Nguyen et al., 2023a) | \( \Phi(\bar{x}^K) - \Phi(x^*) \) | \( \max \left\{ \frac{L\tilde{R}^2}{\varepsilon}, \left( \frac{\sigma \tilde{R}}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | ✓ | X |
| | Clipped-ASMD (Nguyen et al., 2023a) | \( \Phi(y^K) - \Phi(x^*) \) | \( \max \left\{ \sqrt{\frac{L\tilde{R}^2}{\varepsilon}}, \left( \frac{\sigma \tilde{R}}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | ✓ | X |
| | DProx-clipped-SGD-shift (Theorem 2.3) | \( \Phi(\bar{x}^K) - \Phi(x^*) \) | \( \max \left\{ \frac{LR^2}{\varepsilon}, \frac{R\zeta_*}{\sqrt{n}\varepsilon}, \frac{1}{n} \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | ✓ | ✓ |
| | DProx-clipped-SSTM-shift (Theorem 2.4) | \( \Phi(y^K) - \Phi(x^*) \) | \( \max \left\{ \sqrt{\frac{LR^2}{\varepsilon}}, \sqrt{\frac{R\zeta_*}{\sqrt{n}\varepsilon}}, \frac{1}{n} \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \) | ✓ | ✓ |
| As. 4 (\( \mu > 0 \)) | clipped-SGD (Sadiev et al., 2023) | \( \| x^K - x^* \|^2 \) | \( \max \left\{ \frac{L}{\mu}, \left( \frac{\sigma^2}{\mu^2\varepsilon} \right)^{\frac{\alpha}{2(\alpha-1)}} \right\} \) | X | X |
| | DProx-clipped-SGD-shift (Theorem 2.2) | \( \| x^K - x^* \|^2 \) | \( \max \left\{ \frac{L}{\mu}, \frac{\zeta_*}{\sqrt{n}\mu R}, \frac{1}{n} \left( \frac{\sigma^2}{\mu^2\varepsilon} \right)^{\frac{\alpha}{2(\alpha-1)}} \right\} \) | ✓ | ✓ |

(1) All assumptions are made on the whole domain. (2) The authors additionally assume that for a chosen point \( \hat{x} \) from the domain and for \( \eta > 0 \) one can compute an estimate \( \hat{g} \) such that \( \mathbb{P}[\| \hat{g} - \nabla f(\hat{x}) \| > \eta\sigma] \leq \epsilon \). Such an estimate can be found using the geometric median computed over \( O(\ln \epsilon^{-1}) \) samples (Minsker, 2015). (3) The authors assume that \( \nabla f(x^*) = 0 \), which is not true for general composite optimization.

We also generalize the proposed methods to the distributed case (DProx-clipped-SGD-shift and DProx-clipped-SSTM-shift) and prove that they benefit from parallelization. To the best of our knowledge, our results are the first showing linear speed-up under Assumption 1.

• Methods with clipping of gradient differences for distributed composite VIPs. We also apply the proposed trick to the methods for variational inequalities. In particular, we propose DProx-clipped-SGDA-shifts and DProx-clipped-SEG-shifts and rigorously analyze their high-probability convergence.
As in the minimization case, the proposed methods have provable benefits from parallelization. • Tight convergence rates. As a separate contribution, we highlight the tightness of our analysis: in the known special cases (\( \Psi \equiv 0 \) and/or \( n = 1 \)), the derived complexity bounds either recover or outperform previously known ones (see Table 1 and also Table 2 in the appendix). Moreover, in certain regimes, the results have optimal (up to logarithms) dependencies on \( \varepsilon \). This is achieved under quite general assumptions. 1.3 Closely Related Work We discuss closely related work here and defer additional discussion to Appendix A. High-probability bounds for unconstrained convex problems. Standard high-probability convergence results are obtained under the so-called light-tails assumption (sub-Gaussian noise) (Nemirovski et al., 2009; Juditsky et al., 2011; Ghadimi & Lan, 2012). The first work addressing this limitation is (Nazin et al., 2019), where the authors derive the first high-probability complexity bounds for the case of minimization on a bounded set under bounded variance assumption. In the unconstrained case, these results are extended and accelerated by Gorbunov et al. (2020a) for smooth convex and strongly convex minimization problems. Gorbunov et al. (2021) tightens them and generalizes to the case of problems with Hölder-continuous gradients and Gorbunov et al. (2022a) derives high-probability convergence rates in the case of VIPs. Sadiev et al. (2023) relaxes the assumption of bounded variance to Assumption 1 for all problem classes mentioned above, and the results under the same assumption are also derived for clipped-SGD (without acceleration) by Nguyen et al. (2023b) in the convex and non-convex cases. High-probability bounds for composite convex problems. Nazin et al. (2019) propose a truncated version of Mirror Descent for convex and strongly convex composite problems and prove non-accelerated rates of convergence under bounded variance and bounded domain assumptions. Accelerated results under bounded variance assumption for strongly convex composite problems are proven by Davis et al. (2021), who propose an approach based on robust distance estimation. Since this approach requires solving some auxiliary problem at each iteration of the method, the complexity bound from Davis et al. (2021) contains extra logarithmic factors independent of the confidence level. Finally, in their very recent work, Nguyen et al. (2023a) prove high-probability convergence for Clipped Stochastic Mirror Descent (Clipped-SMD) for convex composite problems. Moreover, the authors also propose Accelerated Clipped-SMD (Clipped-ASMD) and show that the algorithm is indeed accelerated but only under the additional assumption that \( \nabla f(x^*) = 0 \). 2 Main Results for Composite Distributed Minimization Problems In this section, we consider problem (1) and methods for it. Failure of the naïve approach. For simplicity, consider a non-stochastic case with strongly convex \( f(x) \), \( n = 1 \). The standard deterministic first-order method for solving problems like (1) is Proximal Gradient Descent (Prox-GD) (Combettes & Pesquet, 2011; Nesterov, 2013): \( x^{k+1} = \text{prox}_{\gamma \Psi}(x^k - \gamma \nabla f(x^k)) \). Due to the good interplay between the structure of the problem, properties of the proximal operator, and the structure of the method, Prox-GD has the same (linear) convergence rate as GD for minimization of \( f(x) \). 
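To make the Prox-GD update above concrete, here is a minimal sketch (ours, not from the paper) for a Lasso-type instance with \( f(x) = \frac{1}{2}\|Ax - b\|^2 \) and \( \Psi(x) = \tau\|x\|_1 \), whose proximal operator has the closed-form soft-thresholding solution; all function names, data, and parameter values are illustrative.

```python
import numpy as np

def prox_gd(grad_f, prox_psi, x0, gamma, num_steps):
    # Proximal Gradient Descent: x^{k+1} = prox_{gamma * Psi}(x^k - gamma * grad_f(x^k)).
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_steps):
        x = prox_psi(x - gamma * grad_f(x), gamma)
    return x

# Illustrative instance: f(x) = 0.5 * ||Ax - b||^2, Psi(x) = tau * ||x||_1.
rng = np.random.default_rng(0)
A, b, tau = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
# prox of gamma * tau * ||.||_1 is coordinate-wise soft-thresholding.
prox_psi = lambda v, gamma: np.sign(v) * np.maximum(np.abs(v) - gamma * tau, 0.0)
gamma = 1.0 / np.linalg.norm(A.T @ A, 2)  # gamma = 1/L, where L is the smoothness constant of f
x_hat = prox_gd(grad_f, prox_psi, np.zeros(5), gamma, 500)
```

With this stepsize, the iterates approach a point satisfying the fixed-point characterization \( x^* = \text{prox}_{\gamma\Psi}(x^* - \gamma\nabla f(x^*)) \) discussed next.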
One of the key reasons for that is that any solution \( x^* \) of problem (1) satisfies \( x^* = \text{prox}_{\gamma \Psi}(x^* - \gamma \nabla f(x^*)) \), i.e., the solutions of (1) are fixed points of Prox-GD (and vice versa), which is equivalent to \( -\nabla f(x^*) \in \partial \Psi(x^*) \), where \( \partial \Psi(x^*) \) is the subdifferential of \( \Psi \) at \( x^* \). However, if we apply gradient clipping to Prox-GD naively,
\[
x^{k+1} = \text{prox}_{\gamma \Psi}\left( x^k - \gamma \text{clip}(\nabla f(x^k), \lambda) \right),
\] (8)
then the method loses the fixed-point property if \( \| \nabla f(x^*) \| > \lambda \), because in this case \( -\text{clip}(\nabla f(x^*), \lambda) \) does not necessarily belong to \( \partial \Psi(x^*) \) and \( x^* \neq \text{prox}_{\gamma \Psi}(x^* - \gamma \text{clip}(\nabla f(x^*), \lambda)) \) in general. Therefore, for such \( \lambda \), one has to decrease the stepsize \( \gamma \) to achieve any accuracy of the solution. This approach slows down the convergence, making it sublinear even without any stochasticity in the gradients. To avoid this issue, it is necessary to set \( \lambda \) large enough. This strategy works in the deterministic case but becomes problematic for a stochastic version of the method from (8):
\[
x^{k+1} = \text{prox}_{\gamma \Psi}\left( x^k - \gamma \text{clip}(\nabla f_{\xi_k}(x^k), \lambda_k) \right),
\] (9)
where \( \xi_k \) is sampled independently from previous iterations. The problem comes from the fact that the existing analysis in the unconstrained case (which is a special case of the composite case) requires taking decreasing \( \lambda_k \) (Gorbunov et al., 2021; Sadiev et al., 2023), which contradicts the requirement that the clipping level has to be large enough. Therefore, more fundamental algorithmic changes are needed.

Non-implementable solution. Let us reformulate the issue: (i) to handle the heavy-tailed noise, we want to use a decreasing clipping level \( \lambda_k \); (ii) but the method should also converge linearly without the noise, i.e., when \( \nabla f_{\xi_k}(x^k) = \mathbb{E}_{\xi_k}[\nabla f_{\xi_k}(x^k)] = \nabla f(x^k) \). In other words, the expectation of the vector that is clipped in the method should converge to zero at the same rate as \( \lambda_k \). If the method converges, then, with high probability, we should have \( \nabla f(x^k) \to \nabla f(x^*) \). These observations lead us to the following purely theoretical algorithm that we call Prox-clipped-SGD-star:
\[
x^{k+1} = \text{prox}_{\gamma \Psi}\left( x^k - \gamma \tilde{g}^k \right), \quad \text{where} \quad \tilde{g}^k = \nabla f(x^*) + \text{clip}(\nabla f_{\xi_k}(x^k) - \nabla f(x^*), \lambda_k).
\] (10)
The method is non-implementable since \( \nabla f(x^*) \) is unknown in advance. Nevertheless, as we explain in the next subsection, the method is useful in designing and analyzing implementable versions. The following theorem gives the complexity of Prox-clipped-SGD-star.

---

2The idea behind this method and its name are inspired by SGD-star proposed by Gorbunov et al. (2020b); Hanzely & Richtárik (2019).

Theorem 2.1. Let \( n = 1 \) and Assumptions 1, 2, and 4 with \( \mu > 0 \) hold for \( Q = B_{2R}(x^*) \), \( R \geq \|x^0 - x^*\| \), for some \( x^* \in \arg\min_{x \in \mathbb{R}^d} \Phi(x) \).
Assume that \( K \geq 1 \), \( \beta \in (0, 1) \), \( A = \ln \frac{4(K+1)}{\beta} \),
\[
0 < \gamma = O \left( \min \left\{ \frac{1}{LA}, \frac{\ln(B_K)}{\mu(K+1)} \right\} \right), \quad B_K = \Theta \left( \max \left\{ 2, \frac{(K+1)^{2(\alpha-1)/\alpha} \mu^2 R^2}{\sigma^2 A^{2(\alpha-1)/\alpha} \ln^2(B_K)} \right\} \right),
\]
\[
\lambda_k = \Theta \left( \frac{\exp(-\gamma \mu (1 + k/2)) R}{\gamma A} \right).
\]
Then to guarantee \( \|x^K - x^*\|^2 \leq \varepsilon \) with probability \( \geq 1 - \beta \), Prox-clipped-SGD-star requires
\[
\tilde{O} \left( \max \left\{ \frac{L}{\mu}, \left( \frac{\sigma^2}{\mu^2 \varepsilon} \right)^{\frac{\alpha}{2(\alpha-1)}} \right\} \right) \text{ iterations/oracle calls}.
\] (11)

Sketch of the proof. Following Gorbunov et al. (2020a); Sadiev et al. (2023), we prove by induction\(^4\) that \( \|x^k - x^*\|^2 \leq 2 \exp(-\gamma \mu k) R^2 \) with high probability. This and \( L \)-smoothness imply that \( \|\nabla f(x^k) - \nabla f(x^*)\| \) decreases as \( \exp(-\gamma \mu k/2) \) and \( \|\nabla f(x^k) - \nabla f(x^*)\| \leq \lambda_k/2 \) with high probability. These facts allow us to properly clip the heavy-tailed noise without sacrificing the convergence rate. See the complete formulation of Theorem 2.1 and the full proof in Appendix D.

The above complexity bound for Prox-clipped-SGD-star coincides with the known one for clipped-SGD for unconstrained problems under the same assumptions (Sadiev et al., 2023) – just as the complexity of Prox-GD coincides with the complexity of GD for unconstrained smooth problems.

Prox-clipped-SGD-shift. As mentioned before, the key limitation of Prox-clipped-SGD-star is that it explicitly uses the shift \( \nabla f(x^*) \), which is not known in advance. Therefore, guided by the literature on variance reduction and communication compression (Gorbunov et al., 2020b; Gower et al., 2020; Mishchenko et al., 2019), it is natural to approximate \( \nabla f(x^*) \) via shifts \( h^k \). This leads us to a new method called Prox-clipped-SGD-shift: as before, \( x^{k+1} = \text{prox}_{\gamma \Psi}(x^k - \gamma \tilde{g}^k) \), but now
\[
\tilde{g}^k = h^k + \hat{\Delta}^k, \quad h^{k+1} = h^k + \nu \hat{\Delta}^k, \quad \hat{\Delta}^k = \text{clip} \left( \nabla f_{\xi_k}(x^k) - h^k, \lambda_k \right),
\] (12)
where \( \nu > 0 \) is a stepsize for learning the shifts. Similar shifts are proposed by Mishchenko et al. (2019) in the context of distributed optimization with communication compression. Since Prox-clipped-SGD-shift is a special case of its distributed variant, we continue our discussion with the distributed version of the method.

Distributed Prox-clipped-SGD-shift. We propose a generalization of Prox-clipped-SGD-shift to the distributed case (2), called Distributed Prox-clipped-SGD-shift (DProx-clipped-SGD-shift):
\[
x^{k+1} = \text{prox}_{\gamma \Psi}(x^k - \gamma \tilde{g}^k), \quad \text{where } \tilde{g}^k = \frac{1}{n} \sum_{i=1}^n \tilde{g}_i^k, \quad \tilde{g}_i^k = h_i^k + \hat{\Delta}_i^k,
\] (13)
\[
h_i^{k+1} = h_i^k + \nu \hat{\Delta}_i^k, \quad \hat{\Delta}_i^k = \text{clip} \left( \nabla f_{\xi_i^k}(x^k) - h_i^k, \lambda_k \right),
\] (14)
where \( \xi_1^k, \ldots, \xi_n^k \) are sampled independently from each other and previous steps. In this method, worker \( i \) updates the shift \( h_i^k \) and sends the clipped vector \( \hat{\Delta}_i^k \) to the server.
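As an illustration of the update (13)–(14), here is a minimal single-process simulation of DProx-clipped-SGD-shift (our sketch, not the authors' code). We assume the standard clipping operator \( \text{clip}(v, \lambda) = \min\{1, \lambda/\|v\|\}\, v \); the stochastic-gradient callables, the prox, and the stepsize/clipping schedules are placeholders.

```python
import numpy as np

def clip(v, lam):
    # Standard clipping: clip(v, lam) = min(1, lam / ||v||) * v (assumed definition).
    norm = np.linalg.norm(v)
    return v if norm <= lam else (lam / norm) * v

def dprox_clipped_sgd_shift(stoch_grads, prox_psi, x0, gamma, nu, lambdas):
    """Simulates the n workers of DProx-clipped-SGD-shift (13)-(14) in one process.

    stoch_grads: list of n callables; stoch_grads[i](x) returns a stochastic
    gradient of f_i at x. The shifts h_i track nabla f_i(x^*) on the fly.
    """
    x = np.asarray(x0, dtype=float).copy()
    n, d = len(stoch_grads), x.size
    h = np.zeros((n, d))                                             # shifts h_i^0 = 0
    for lam_k in lambdas:
        # Each worker clips the difference between its stochastic gradient and its shift.
        deltas = np.stack([clip(stoch_grads[i](x) - h[i], lam_k) for i in range(n)])
        g = h.mean(axis=0) + deltas.mean(axis=0)                     # g^k = h^k + (1/n) sum_i Delta_i^k
        h = h + nu * deltas                                          # h_i^{k+1} = h_i^k + nu * Delta_i^k
        x = prox_psi(x - gamma * g, gamma)                           # proximal step
    return x
```

Setting \( n = 1 \) recovers Prox-clipped-SGD-shift (12), and replacing the shifts \( h_i \) by the fixed vectors \( \nabla f_i(x^*) \) recovers the non-implementable Prox-clipped-SGD-star idea from (10).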
Since \( \tilde{g}^k = h^k + \frac{1}{n} \sum_{i=1}^n \hat{\Delta}_i^k \) and \( h^{k+1} = h^k + \frac{\nu}{n} \sum_{i=1}^n \hat{\Delta}_i^k \), where \( h^k = \frac{1}{n} \sum_{i=1}^n h_i^k \), workers do not need to send \( h_i^k \) to the server for \( k > 0 \). We notice that even when \( \Psi \equiv 0 \), i.e., the problem is unconstrained, the individual gradients \( \{\nabla f_i(x^*)\}_{i \in [n]} \) of the clients’ functions at the solution of problem (1) are not necessarily zero, though their sum equals zero. However, if clipping is applied without any shifts to the local (stochastic) gradients, then, similarly to the case of non-distributed Prox-GD (8), the clipping operation also breaks the fixed-point property, since \( \frac{1}{n} \sum_{i=1}^n \text{clip}(\nabla f_i(x^*), \lambda) \neq 0 \) for small values of \( \lambda \). This highlights the importance of the shifts for the distributed unconstrained case. For the proposed method, we derive the following result.

\(^3\)In all of our results, one can use any solution \( x^* \), e.g., one can take \( x^* \) to be the projection of \( x^0 \) onto the solution set.

\(^4\)We use the induction to apply Bernstein’s inequality for the estimation of the sums appearing due to the stochasticity of the gradients. We refer to Section 3 for the details.

Theorem 2.2 (Convergence of DProx-clipped-SGD-shift: quasi-strongly convex case). Let \( K \geq 1, \beta \in (0, 1), A = \ln \frac{48n(K+1)}{\beta} \). Let Assumptions 1, 2, and 4 with \( \mu > 0 \) hold for \( Q = B_{3n\sqrt{2}R}(x^*) \), where \( R \geq \|x^0 - x^*\| \). Let \( \zeta_* = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \|\nabla f_i(x^*)\|^2} \),
\[
\nu = \Theta \left( \frac{1}{A} \right), \quad 0 < \gamma = O \left( \min \left\{ \frac{1}{LA}, \frac{\sqrt{n}R}{A\zeta_*}, \frac{\ln(B_K)}{\mu(K+1)} \right\} \right),
\]
\[
B_K = \Theta \left( \max \left\{ 2, \frac{(K+1)^{2(\alpha-1)/\alpha} \mu^2 n^{2(\alpha-1)/\alpha} R^2}{\sigma^2 A^{2(\alpha-1)/\alpha} \ln^2(B_K)} \right\} \right), \quad \lambda_k = \Theta \left( \frac{n \exp(-\gamma \mu(1+k/2)) R}{\gamma A} \right).
\]
Then to guarantee \( \|x^K - x^*\|^2 \leq \varepsilon \) with probability \( \geq 1 - \beta \), DProx-clipped-SGD-shift requires
\[
\tilde{O} \left( \max \left\{ \frac{L}{\mu}, \frac{\zeta_*}{\sqrt{n}\mu R}, \frac{1}{n} \left( \frac{\sigma^2}{\mu^2 \varepsilon} \right)^{\frac{\alpha}{2(\alpha-1)}} \right\} \right) \text{ iterations/oracle calls per worker.}
\]

Sketch of the proof. The proof follows similar steps to the proof of Theorem 2.1 up to the change of the Lyapunov function: by induction, we prove that \( V_k \leq 2 \exp(-\gamma \mu k)V \) with high probability, where
\[
V_k = \|x^k - x^*\|^2 + \frac{C^2 \gamma^2 A^2}{n} \sum_{i=1}^{n} \|h^k_i - \nabla f_i(x^*)\|^2.
\]
The choice of the Lyapunov function reflects the importance of the “quality” of the shifts \( \{h^k_i\}_{i \in [n]} \), i.e., their proximity to \( \{\nabla f_i(x^*)\}_{i \in [n]} \). Moreover, we increase the clipping level \( n \) times to balance the bias and variance of \( \tilde{g}^k \); see Appendix B. This allows us to reduce the last term in the complexity bound \( n \) times. See the complete formulation of Theorem 2.2 and the full proof in Appendix E. The next theorem gives the convergence result in the convex case.

Theorem 2.3 (Convergence of DProx-clipped-SGD-shift: convex case). Let \( K \geq 1, \beta \in (0, 1), A = \ln \frac{48n(K+1)}{\beta} \). Let Assumptions 1, 2, and 3 with \( \mu = 0 \) hold for \( Q = B_{\sqrt{2}R}(x^*) \), where \( R \geq \|x^0 - x^*\| \).
Assume that \( \nu = 0 \), \( \zeta_* = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \|\nabla f_i(x^*)\|^2} \),
\[
0 < \gamma = O \left( \min \left\{ \frac{1}{LA}, \frac{\sqrt{n}R}{A\zeta_*}, \frac{n^{(\alpha-1)/\alpha} R}{\sigma K^{1/\alpha} A^{(\alpha-1)/\alpha}} \right\} \right), \quad \lambda_k = \lambda = \Theta \left( \frac{nR}{\gamma A} \right).
\]
Then to guarantee \( \Phi(\bar{x}^K) - \Phi(x^*) \leq \varepsilon \) for \( \bar{x}^K = \frac{1}{K+1} \sum_{k=0}^{K} x^k \) with probability \( \geq 1 - \beta \), DProx-clipped-SGD-shift requires
\[
\tilde{O} \left( \max \left\{ \frac{LR^2}{\varepsilon}, \frac{R\zeta_*}{\sqrt{n}\varepsilon}, \frac{1}{n} \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \right) \text{ iterations/oracle calls per worker.}
\]

Discussion of the results for DProx-clipped-SGD-shift. Up to the difference between \( V \) and \( \|x^0 - x^*\|^2 \), in the single-node case, the derived results coincide with the ones known for clipped-SGD in the unconstrained case (Sadiev et al., 2023). In the composite non-distributed case (\( n = 1 \)), the result of Theorem 2.2 is the first known of its type, and Theorem 2.3 recovers (up to logarithmic factors) the result from (Nguyen et al., 2023a) for a version of Stochastic Mirror Descent with gradient clipping (Clipped-SMD), see Table 1. Indeed, the parameter \( \tilde{R}^2 = R \left( 3R + L^{-1}(2\eta \sigma + \|\nabla f(x^0)\|) \right) \) for some \( \eta > 0 \) from the result by Nguyen et al. (2023a) equals \( \Theta(R^2 + R\zeta_*/L) \) when \( \eta \) is sufficiently small (otherwise \( \tilde{R}^2 \) can be worse than \( \Theta(R^2 + R\zeta_*/L) \)), which can be seen from the following inequalities implied by smoothness: \( \|\nabla f(x^0)\| \leq \|\nabla f(x^*)\| + \|\nabla f(x^0) - \nabla f(x^*)\| \leq \|\nabla f(x^*)\| + L\|x^0 - x^*\| \) and \( \|\nabla f(x^*)\| \leq \|\nabla f(x^0)\| + \|\nabla f(x^0) - \nabla f(x^*)\| \leq \|\nabla f(x^0)\| + L\|x^0 - x^*\| \). Since in this work we do not focus on the logarithmic factors, we do not show them in the main text and provide the complete expressions in the appendix. Nguyen et al. (2023a) have better dependencies on the parameters under the logarithms than our results. We conjecture that, by adjusting the proof technique from (Nguyen et al., 2023a), one can improve the logarithmic factors in our results as well.

It is worth mentioning that shifts are not needed in the convex case because the method converges slowly enough to work with a constant clipping level, i.e., in the convex case the method requires less tight gradient estimates and is more robust to the bias than in the strongly convex case. In the quasi-strongly convex case, the shifts’ stepsize is chosen as \( \nu = \Theta(1/A) \) and it does not explicitly affect the rate since \( \gamma\mu = O(1/A) \); see the details in Section 3 and Appendix E.

Next, as expected for a distributed method, the terms in the complexity bounds related to the noise improve with the growth of \( n \). More precisely, the terms depending on the noise level \( \sigma \) are proportional to \( 1/n \), i.e., our results show a so-called linear speed-up in the complexity – a desirable feature for a stochastic distributed method. This aspect highlights the benefits of parallelization. To the best of our knowledge, the results for the distributed methods proposed in our work are the only existing ones under Assumption 1 (even if we take into account the in-expectation convergence results).
In the special case of \( \alpha = 2 \), our results match (up to logarithmic factors) the SOTA ones from (Gorbunov et al., 2021), since under the bounded variance assumption parallelization with linear speed-up comes for free if the clipping is applied after averaging, as it should be in the parallelized versions of the methods from (Gorbunov et al., 2021) to keep their analysis unchanged. Indeed, when \( \{ \nabla f_{\xi_i}(x) \}_{i \in [n]} \) are independent stochastic gradients satisfying Assumption 1 with parameters \( \sigma > 0 \) and \( \alpha = 2 \), then \( \frac{1}{n} \sum_{i \in [n]} \nabla f_{\xi_i}(x) \) also satisfies Assumption 1 with parameters \( \sigma/\sqrt{n} \) and \( \alpha = 2 \). However, when \( \alpha < 2 \), achieving linear speed-up is not that straightforward. If \( \{ \nabla f_{\xi_i}(x) \}_{i \in [n]} \) are independent stochastic gradients satisfying Assumption 1 with parameters \( \sigma > 0 \) and \( \alpha < 2 \), then the existing results (Wang et al., 2021, Lemma 7) give a weaker guarantee: \( \frac{1}{n} \sum_{i \in [n]} \nabla f_{\xi_i}(x) \) satisfies Assumption 1 with parameter \( \frac{2^{2-\alpha}d^{\frac{1}{\alpha}-\frac{1}{2}}\sigma}{n^{\frac{1}{\alpha}}} \), which is dimension dependent, and the same \( \alpha \). Therefore, if one applies this result to the known ones from (Sadiev et al., 2023; Nguyen et al., 2023a), then the resulting complexity will have an extra factor of \( d^{\frac{1}{\alpha-1}-\frac{1}{2(\alpha-1)}} \) in the term that depends on \( \sigma \). For large-scale or even medium-scale heavy-tailed problems, this factor can be huge, e.g., when \( d = 1000 \) and \( \alpha = \frac{7}{6} \), this factor is \( 1000^{\frac{1}{\alpha-1}-\frac{1}{2(\alpha-1)}} = 1000^{6-3} = 10^9 \). To avoid these issues, we apply gradient clipping on the workers and then average the clipped vectors, not vice versa. This is also partially motivated by the popularity of gradient clipping for ensuring differential privacy guarantees (Abadi et al., 2016; Chen et al., 2020) in Federated Learning (Konečnỳ et al., 2016; Kairouz et al., 2021). Therefore, the proposed distributed methods can be useful for differential privacy as well, though we do not study this aspect in our work.

**Acceleration.** Next, we propose a distributed version of the clipped Stochastic Similar Triangles Method (Gorbunov et al., 2020a; Gasnikov & Nesterov, 2016) for composite problems (DProx-clipped-SSTM-shift):
\[
x^{k+1} = \frac{A_k y^k + \alpha_{k+1} z^k}{A_{k+1}}, \quad z^{k+1} = \text{prox}_{\alpha_{k+1}\Psi}(z^k - \alpha_{k+1} \tilde{g}(x^{k+1})),
\]
\[
\tilde{g}(x^{k+1}) = \frac{1}{n} \sum_{i=1}^{n} \tilde{g}_i(x^{k+1}), \quad \tilde{g}_i(x^{k+1}) = h_i^k + \hat{\Delta}_i^k,
\]
\[
h_i^{k+1} = h_i^k + \nu_k \hat{\Delta}_i^k, \quad \hat{\Delta}_i^k = \text{clip}\left( \nabla f_{\xi_i^k}(x^{k+1}) - h_i^k, \lambda_k \right),
\]
\[
y^{k+1} = \frac{A_k y^k + \alpha_{k+1} z^{k+1}}{A_{k+1}},
\]
where \( \xi_1^k, \ldots, \xi_n^k \) are sampled independently from each other and previous steps. For the proposed method, we derive the following result.

**Theorem 2.4 (Convergence of DProx-clipped-SSTM-shift).** Let Assumptions 1, 2, and 3 with \( \mu = 0 \) hold for \( Q = B_{5\sqrt{2n}R}(x^*) \), where \( R \geq \| x^0 - x^* \| \). Let \( \zeta_* = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \| \nabla f_i(x^*) \|^2} \), \( C = \Theta(A/\sqrt{n}) \), \( K_0 = \Theta(A^2) \), where \( K \geq 1 \), \( \beta \in (0, 1) \), \( A = \ln \frac{10nK}{\beta} \).
Assume that
\[
\nu_k = \begin{cases} \frac{2k+5}{(k+3)^2}, & \text{if } k > K_0, \\ \frac{(k+2)^2}{C^2(K_0+2)^2 n}, & \text{if } k \leq K_0, \end{cases}
\]
\[
a = \Theta \left( \max \left\{ 2, \frac{A^4}{n}, \frac{A^3 \zeta_*}{L\sqrt{n}R}, \frac{\sigma K^{(\alpha+1)/\alpha} A^{(\alpha-1)/\alpha}}{LRn^{1/\alpha}} \right\} \right), \quad \lambda_k = \Theta \left( \frac{nR}{\alpha_{k+1} A} \right).
\]
Then to guarantee \( \Phi(y^K) - \Phi(x^*) \leq \varepsilon \) with probability \( \geq 1 - \beta \), DProx-clipped-SSTM-shift requires
\[
\tilde{O} \left( \max \left\{ \sqrt{\frac{LR^2}{\varepsilon}}, \sqrt{\frac{R \zeta_*}{\sqrt{n} \varepsilon}}, \frac{1}{n} \left( \frac{\sigma R}{\varepsilon} \right)^{\frac{\alpha}{\alpha-1}} \right\} \right) \text{ iterations/oracle calls per worker.}
\] (21)

Sketch of the proof. The proof of this result resembles the proof for clipped-SSTM from (Sadiev et al., 2023) but has some noticeable differences. In addition to handling the extra technical challenges appearing due to the composite structure (e.g., one cannot apply some useful formulas like \( z^k - z^{k+1} = \alpha_{k+1} \tilde{g}(x^{k+1}) \) that hold in the unconstrained case), we use a non-standard potential function \( M_k \) defined as \( M_k = \| z^k - x^* \|^2 + \left( C^2 \alpha_{K_0+1} / n \right) \sum_{i=1}^n \| h_i^k - \nabla f_i(x^*) \|^2 \). We elaborate on this and provide the complete proof in Appendix F.

When \( n = 1 \), the derived result has an optimal dependence on \( \varepsilon \) (up to logarithmic factors) (Nemirovskij & Yudin, 1983; Zhang et al., 2020b). In contrast to the result from (Nguyen et al., 2023a), we do not assume that \( \nabla f(x^*) = 0 \). Moreover, like DProx-clipped-SGD-shift, DProx-clipped-SSTM-shift benefits from parallelization since the term depending on \( \sigma \) in (21) is proportional to \( 1/n \). When \( n \) is sufficiently large, the effect of acceleration can become significant even for large \( \sigma \). In Appendix F.2, we also provide the convergence results for the restarted version of DProx-clipped-SSTM-shift, assuming additionally that \( f \) is strongly convex and that one can compute the starting shifts \( h_i^0 \) as \( \nabla f_i(x^0) \).

### 3 ON THE PROOFS STRUCTURE

In this section, we elaborate on the structure of the proofs of our results and highlight additional challenges appearing due to the presence of the composite term and the distributed nature of the methods. The proof of each result consists of two parts: an optimization/descent lemma and the analysis of the sums appearing due to the stochasticity and biasedness of the updates (due to the clipping). In the first part, we usually follow some standard analysis of the corresponding deterministic method without clipping and separate the stochastic part from the deterministic one (though for DProx-clipped-SSTM-shift we use a quite non-standard Lyapunov function, which can be interesting on its own).
For example, in the analysis of DProx-clipped-SGD-shift under Assumption 4, we prove the following inequality:
\[
V_{K+1} \leq (1 - \gamma \mu)^{K+1} V_0 + \frac{2\gamma}{n} \sum_{k=0}^K \sum_{i=1}^n (1 - \gamma \mu)^{K-k} \langle x^k - x^* - \gamma (\nabla f(x^k) - \nabla f(x^*)) , \omega_{i,k} \rangle \\
+ \frac{\gamma^2}{n^2} \sum_{k=0}^K \sum_{i=1}^n (1 - \gamma \mu)^{K-k} \|\omega_{i,k}\|^2 + \gamma^2 \sum_{k=0}^K (1 - \gamma \mu)^{K-k} \|\omega_k\|^2,
\]
where \( V_k = \| x^k - x^* \|^2 + \frac{C^2 \gamma^2 A^2}{n} \sum_{i=1}^n \| h_i^k - \nabla f_i(x^*) \|^2 \) for some numerical constant \( C > 0 \), and the vectors \( \omega_{i,k} = \nabla f_i(x^k) - \tilde{g}_i^k \) represent the discrepancy between the full gradients and their estimates. Moreover, to use this inequality for some \( K = T \geq 0 \) we need to show that \( \{ x^k \}_{k=0}^T \) belong to the set where the assumptions hold (in this particular case, to \( B_{3n\sqrt{2}R}(x^*) \)) with high probability. We always do this by induction. More precisely, we prove that \( \mathbb{P}\{ E_k \} \geq 1 - k\beta/(K+1) \) for the probability event \( E_k \) defined as follows: the inequalities \( V_t \leq 4 \exp(-\gamma \mu t) R^2 \) and \( \left\| \frac{\gamma}{n} \sum_{i=1}^r \omega_{i,t-1}^u \right\| \leq \exp(-\gamma \mu (t-1)/2) \sqrt{R^2/2} \) hold for \( t = 0, 1, \ldots, k \) and \( r = 1, 2, \ldots, n \) simultaneously, where \( \omega_{i,t}^u = \mathbb{E}_{\xi_i^t} [\tilde{g}_i^t] - \tilde{g}_i^t \) denotes the unbiased part of \( \omega_{i,t} \) and \( \mathbb{E}_{\xi_i^t} [\cdot] \) denotes an expectation w.r.t. \( \xi_i^t \). To prove this, we use the Bernstein inequality for martingale differences (see Lemma B.1). However, to apply the Bernstein inequality we need to circumvent multiple technical difficulties related to the estimation of the norm of the clipped vector (which involves derivations related to the shifts \( \{ h_i^k \}_{i \in [n]} \)) and the proper choice of the clipping level to control the bias and variance and achieve the desired linear speed-up (see Lemma B.3 and the following discussion). Moreover, when \( n > 1 \) (distributed case), we also need to apply an additional induction over the clients to estimate sums like \( \circledast \) from (265).

---

5 In the appendix, we analyze this case in the generality of variational inequalities. Here we provide a simplified version for minimization.

REFERENCES

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016.

Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods. In Conference on Learning Theory, pp. 778–816. PMLR, 2022.

Amir Beck. First-order methods in optimization. SIAM, 2017.

George Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45, 1962.

Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. In International Conference on Artificial Intelligence and Statistics, pp. 172–235. PMLR, 2023.

Xiangyi Chen, Steven Z Wu, and Mingyi Hong. Understanding gradient clipping in private sgd: A geometric perspective. Advances in Neural Information Processing Systems, 33:13773–13782, 2020.

Patrick L Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. Fixed-point algorithms for inverse problems in science and engineering, pp. 185–212, 2011.

Ashok Cutkosky and Harsh Mehta.
High-probability bounds for non-convex stochastic optimization with heavy tails. Advances in Neural Information Processing Systems, 34, 2021.

Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, and Junyu Zhang. From low probability to high confidence in stochastic convex optimization. Journal of Machine Learning Research, 22(49):1–38, 2021.

Kacha Dzhaparidze and JH Van Zanten. On Bernstein-type inequalities for martingales. Stochastic Processes and their Applications, 93(1):109–117, 2001.

David A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3(1):100–118, 1975.

Alexander Gasnikov and Yurii Nesterov. Universal fast gradient method for stochastic composite optimization problems. arXiv preprint arXiv:1604.05275, 2016.

Saeed Ghadimi and Guanghui Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization i: A generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469–1492, 2012.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.

Eduard Gorbunov, Marina Danilova, and Alexander Gasnikov. Stochastic optimization with heavy-tailed noise via accelerated gradient clipping. Advances in Neural Information Processing Systems, 33:15042–15053, 2020a.

Eduard Gorbunov, Filip Hanzely, and Peter Richtárik. A unified theory of sgd: Variance reduction, sampling, quantization and coordinate descent. In International Conference on Artificial Intelligence and Statistics, pp. 680–690. PMLR, 2020b.

Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, and Alexander Gasnikov. Near-optimal high probability complexity bounds for non-smooth stochastic optimization with heavy-tailed noise. arXiv preprint arXiv:2106.05958, 2021.

Eduard Gorbunov, Marina Danilova, David Dobre, Pavel Dvurechenskii, Alexander Gasnikov, and Gauthier Gidel. Clipped stochastic methods for variational inequalities with heavy-tailed noise. Advances in Neural Information Processing Systems, 35:31319–31332, 2022a.

Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: $O(1/K)$ last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In International Conference on Artificial Intelligence and Statistics, pp. 366–402. PMLR, 2022b.
OEDM8mzbsl
Is this information meant to be used by the user, or was it just used in the paper for illustrative purposes? If it’s the former, doesn’t it run into issues related to hallucination and the system potentially providing unreliable information?
Evaluating Multi-Agent Coordination Abilities in Large Language Models Anonymous authors Paper under double-blind review Abstract A pivotal aim in contemporary AI research is to develop agents proficient in multi-agent coordination, enabling effective collaboration with both humans and other systems. Large Language Models (LLMs), with their notable ability to understand, generate, and interpret language in a human-like manner, stand out as promising candidates for the development of such agents. In this study, we build and assess the effectiveness of agents crafted using LLMs in various coordination scenarios. We introduce the LLM-Coordination (LLM-Co) Framework, specifically designed to enable LLMs to play coordination games. With the LLM-Co framework, we conduct our evaluation with three game environments and organize the evaluation into five aspects: Theory of Mind, Situated Reasoning, Sustained Coordination, Robustness to Partners, and Explicit Assistance. First, the evaluation of the Theory of Mind and Situated Reasoning reveals the capabilities of LLM to infer the partner’s intention and reason actions accordingly. Then, the evaluation around Sustained Coordination and Robustness to Partners further showcases the ability of LLMs to coordinate with an unknown partner in complex long-horizon tasks, outperforming Reinforcement Learning baselines. Lastly, to test Explicit Assistance, which refers to the ability of an agent to offer help proactively, we introduce two novel layouts into the Overcooked-AI benchmark, examining if agents can prioritize helping their partners, sacrificing time that could have been spent on their tasks. This research underscores the promising capabilities of LLMs in sophisticated coordination environments and reveals the potential of LLMs in building strong real-world agents for multi-agent coordination. 1 Introduction Humans engage in various coordination tasks in their daily lives and work, including mundane activities like cooking and more important tasks like search and rescue. In order to assist humans with tedious or hazardous tasks, it is essential to create agents capable of coordinating with humans or other autonomous systems. Recently, agents based on Large Language Models have successfully demonstrated emergent problem-solving and task-completion capabilities in complex environments [Raman et al., 2022; Wang et al., 2023; Wu et al., 2023]. They have shown high-level reasoning abilities and hints of Theory of Mind abilities [Kosinski, 2023]. In this work, we intend to find out how well Large Language Models can reason to solve tasks that require multi-agent coordination. Effective coordination requires agents to be able to infer their partner’s next actions (Theory of Mind), reason about the inferred action in the context of their shared environment (Situated Reasoning), take actions and make adjustments to execute the plan over a long duration (Sustained Coordination) and be able to adjust to unseen partners (Robustness to Partners). Furthermore, we need agents to be capable of proactively providing Explicit Assistance to their partners during coordination tasks. In order to evaluate the multi-agent coordination abilities of LLMs, we adopt three different coordination games. The first game is Collab Escape, where two agents need to coordinate to escape from an adversary. The second is Collab Capture where two agents chase an adversary through a maze of rooms. 
The final game is the Overcooked [Carroll et al., 2019a], which requires two players to cook and deliver onion soups. To enable Large Language Models to understand and play these games we introduce the LLM-Coordination framework. The LLM-Co framework provides agents with contextual state information and feasible actions and interprets agent outputs for execution in real time. We will refer to agents using the LLM-Coordination framework as LLM-Co agents. In the evaluation, we first test the Theory of Mind (ToM) and Situated Reasoning abilities of LLMs, which are preliminary skills required for coordination. ToM allows models to infer the intentions and beliefs of others, while Situated Reasoning enables them to anchor these inferences in the contextual setting of the environment. We design the LLM-ToM-Reasoning Test Set, including independent scenarios from our multi-agent coordination environments. The LLM-ToM-Reasoning Test Set requires the LLMs to reason about their partner’s intention and the current state of the environment to provide the optimal next action. We compare four different LLMs (GPT-4, GPT-3.5-turbo, Vicuna-33B, and Vicuna-13B) [OpenAI (2023); Ouyang et al. (2022); Chiang et al. (2023)]. We observe that GPT-4 overwhelmingly outperforms the other LLMs, getting an almost human-level score. In order to evaluate sustained coordination abilities in LLM-Co agents, we use GPT-4 as the LLM of choice as it is the only candidate that provides acceptable ToM and Situated Reasoning skills. We compare the performance of LLM-Co Agents (w. GPT-4) with Reinforcement Learning (RL) based baselines, which are the gold standards for AI-AI gameplay. We also experiment with varying the partners in the Coordination Environment to proxy human agents to test the agent’s Robustness to Partners. We observe that LLM-Co agents perform better than or equal to the RL baseline in both AI-AI and AI-human proxy gameplay without any fine-tuning. Additionally, LLM agents have a further edge over RL methods due to their ability to fully explain the rationale behind their actions in free text. Finally, we study whether LLM-Co agents can proactively provide help to their partner (Explicit Assistance). We extend the existing layouts in the Overcooked-AI environment to involve a gate element that forces agents to assist their partners in order to complete deliveries. Through experiments on these new layouts, we discover that LLM-Co agents can determine the right strategy needed to help out their partners. However, they require a “helper directive,” which uses natural language to prompt the LLM to be attentive to situations where their partner may need such help. We show that LLM-Co agents are able to outperform MARL baselines on these new layouts as well. We summarize the key contributions of our work as follows: • We develop the LLM-Coordination Framework that equips Large Language Models with tools and contextual information allowing them to play long-horizon games and execute LLM-generated natural language actions in real-time. • We present the LLM-Reasoning test set which consists of scenarios from the three coordination games explicitly designed to test the Theory of Mind and Situated Reasoning abilities of Large Language Models. • Using GPT-4 (which performs best on the LLM-ToM-Reasoning test) as the LLM of choice, we perform evaluations for assessing sustained coordination. 
We show that LLM-Co agents outperform Reinforcement Learning baselines in comprehensive evaluations in the multi-turn Overcooked-AI environment. • We introduce two new layouts to the Overcooked-AI environment that require Large Language Models to provide Explicit Assistance to their partners. Through quantitative and qualitative evaluations, we show that LLM-Co Agents understand the common-payoff nature of the game and are able to figure out the right actions and reasoning required to assist their partners.

2 RELATED WORK

2.1 MULTI-AGENT COORDINATION

In Game Theory, Pure Coordination games are situations where the payoff is commonly shared between both agents. In such situations, cooperating is the best strategy. Various benchmarks have been used to evaluate Multi-Agent Coordination abilities over the years [Lowe et al., 2017; Bard et al., 2020]. In recent years, the Overcooked environment has emerged as a popular testbed for coordination experiments [Carroll et al., 2019a; Wu et al., 2021]. Our research leverages the Overcooked-AI environment [Carroll et al., 2019b]. The foundational work by Carroll et al. (2019a) emphasized the significance of incorporating human data for effective collaboration. Subsequent research has pivoted towards enabling self-play-trained agents to coordinate seamlessly with humans within this environment. These studies employ various techniques, including self-play with past agent checkpoints [Strouse et al., 2021], centralized population entropy objectives [Zhao et al., 2023], open-ended objectives using graph theory [Li et al., 2023], policy ensembles with context-aware mechanisms [Lou et al., 2023], and the incorporation of human biases as linear hidden rewards [Yu et al., 2023], to enhance the training and diversity of AI agents in different scenarios. Embodied environments, usually set up in household settings, have also recently been used to study multi-agent collaboration [Puig et al., 2021; Jain et al., 2020, 2019; Gan et al., 2021].

2.2 Planning and Reasoning with Large Language Models

Large Language Models (LLMs) have demonstrated remarkable capabilities of reasoning in natural language [OpenAI, 2023; Ouyang et al., 2022; Chiang et al., 2023]. These models have achieved state-of-the-art performance across a spectrum of NLP tasks, showcasing their proficiency at verbal reasoning. Strategies like Chain-of-Thought prompting [Wei et al., 2022], which generates step-by-step free-text explanations before coming to conclusions, have further boosted the reasoning capacities of LLMs. Approaches augmenting an LLM with memory, belief, and tools have been shown to be useful in multi-step problem-solving [Park et al., 2023; Huang et al., 2022; Raman et al., 2022]. Isolated LLM agents have been shown to be capable of life-long learning and task completion in open-domain survival games, outperforming existing SOTA Reinforcement Learning methods [Wu et al., 2023; Wang et al., 2023]. More recently, such LLM agents have been paired with rule-based low-level planners to execute tasks in embodied environments [Liang et al., 2022; Song et al., 2022]. Zhang et al. (2023) demonstrated an efficiency increase in a collaborative embodied multi-agent setting, and Mandi et al. (2023) have shown the ability of LLMs to perform collaborative manipulator motion planning. Taking the planning and reasoning abilities of LLMs a step further, we intend to perform a systematic evaluation of the coordination abilities of Large Language Models in Common Payoff games.
3 Evaluation Environments

3.1 Collab Capture

Figure 1: The Collab Capture game involves two agents, Alice (Blue) and Bob (Green), chasing a thief across multiple rooms. Some rooms are connected by doors, which can be controlled by buttons in different rooms.

Collab Capture involves two agents trying to capture an adversary in a maze of interconnected rooms. The rooms are connected by doors, which can be controlled through access buttons that can be found in different rooms. The agents' task is to capture the adversary in the least amount of time using effective strategies, including cornering the adversary, disabling the adversary, or enabling their partner.

3.2 Collab Escape

Based on the popular video game "Dead by Daylight", Collab Escape involves two agents trying to escape an adversary in a maze of interconnected rooms. They need to fix two generators located in randomly selected rooms to open an exit portal. The adversary tries to catch the agents, and the win condition is any one agent escaping. This game requires strategies like luring the adversary away from the partner, sacrificing for the partner's safety, and manipulating the movement of the adversary.

3.3 Overcooked

Figure 2: All layouts from the Overcooked environment we use for our tests. The two agents Alice (Blue) and Bob (Green) need to collaborate to cook, plate, and deliver onion soups. From Left to Right: Cramped Room, Asymmetric Advantages, Forced Coordination, Coordination Ring, and Counter Circuit.

In the Overcooked-AI environment [Carroll et al., 2019a], two agents—Alice (Blue) and Bob (Green)—collaborate to cook and deliver onion soups. Different environments feature varying numbers of onion dispensers ($o$), plate dispensers ($p$), cookers ($c$), delivery areas ($d$), and counters ($k$). Agents must load three onions into a cooker to start it; the soup then takes 20 time steps to cook. Once done, an agent transfers the soup to a plate and delivers it.

3.4 Overcooked-Assist: Demonstrating Explicit Assistance in Overcooked

Figure 3: Additional layouts that require agents to explicitly help their partner complete a delivery. These new layouts utilize walls and gates to create situations requiring explicit assistance.

The layouts in Overcooked-AI [Carroll et al., 2019a] are an excellent test for gauging the ability of participating agents to sync their actions with their partners. It requires agents to effectively navigate the layout and time their actions in response to their partners in order to increase efficiency. However, none of these environments elicit the need for agents to explicitly help out their partner by sacrificing their own time. We intend to evaluate not only the LLM agent's ability to make the choice to actively help its partner, but also whether it can realize when the opportunity to help has arrived and take the right action to facilitate its partner. If the LLMs cannot make such a choice implicitly, we intend to see the effect of tuning the directives to bring about such a cooperative intention. To elicit situations that require one agent to drop their own cooking/delivery and help out their partner, we extend the Overcooked environment by introducing two new facilities (gates and walls) and two new layouts.

3.4.1 GATED DELIVERY

Visualized in Figure 3, the Gated Delivery layout requires both agents to help out their partners during soup delivery. The two gates g0 and g1 make the delivery area inaccessible.
Gates can be opened by an agent provided they are not holding anything in their hand. Once opened, a gate remains open for a very short time enough for an agent to move through it but not enough for an agent to open it in advance before picking up cooked soup for delivery. This necessitates an agent not holding cooked soup in their hand to go and open the gate for the delivery agent. The kitchen counters are replaced by walls to prevent the agents from taking the loophole of placing their soups temporarily on counters to open the gates. In this environment, both agents are equally placed, and they need to be acutely aware of their partner’s needs in order to complete even a single delivery. 3.4.2 LOCKED Visualized in Figure 3, the Locked environment is structurally similar to Soup Passing, except there is no shared counter to pass soup on. Instead, the agent in the left partition has to understand that their partner is locked behind a closed gate, holding a soup. In real-life scenarios, one collaborating partner might find themselves disadvantaged in a similar manner. In order to develop reliable assistive agents, the advantaged agent needs to understand the situation and make the choice to help out their partner since that provides a better common payoff. 4 LLM-Coordination Framework Figure 4: Visual summary of the LLM-Co framework. Our framework serves as the backbone for an individual agent, focusing on bringing out its coordination ability. The framework translates abstract game details into an LLM-compatible format and then utilizes the generated LLM output to take actions in the game world. The games mentioned in Section 3 are translated into textual objectives using the LLM-Coordination Framework. The details of the game along with the rules and the layout of the map are condensed into a short Game Description (G). Along with the game description, we also provide a set of Directives (D) that guide agent behavior. These descriptions are passed as initial prompts to the Large Language Model. At each turn, the LLM receives the current state description (D(S)) that is programmatically obtained from the environment, and the player states S. Since LLMs struggle with grid-based reasoning and navigation, we provide relative distances from the agent to each location of interest in the state description. Along with player-specific variables, other salient state variables are also included as natural language descriptions. Finally, an agent is provided its partner’s inventory and relative position to allow it to consider their intentions. The state information provided to the LLM is equivalent to what a Reinforcement Learning agent would receive in the form of vectors. The LLM operates at a medium-level action space which is made up of verb-based actions like "pick", "place", "move" etc. It is provided with a set of feasible actions $M_f$ to choose from to enable easier reasoning. The feasible action set is decided on the basis of player inventory and accessibility of locations. The LLM utilizes the information $\langle G, D_t, S, M_f \rangle$ to assess the situation and generates an action $m$ from the provided set $M_f$. We then use an Action Manager to interpret the action based on the verb used and the location mentioned. The Action Manager generates low-level actions needed to execute the medium-level action. In the following experiments, we will refer to LLM Agents that use the LLM-Coordination Framework as LLM-Co Agents. 
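As a rough illustration of how these components fit together, here is a minimal sketch of one decision step of an LLM-Co agent (our sketch, not the authors' code); the prompt layout, the `query_llm` helper, and the environment interface (`describe_state`, `feasible_actions`, `execute`) are placeholders.

```python
def build_prompt(game_description, directives, state_description, feasible_actions):
    # Assemble <G, D, S, M_f>: game rules, directives, current state, feasible medium-level actions.
    actions = "\n".join(f"- {a}" for a in feasible_actions)
    return (
        f"{game_description}\n\nDirectives:\n{directives}\n\n"
        f"Current state:\n{state_description}\n\n"
        f"Feasible actions:\n{actions}\n\n"
        "Analyze the current state, infer your partner's likely next action, "
        "and reply with exactly one feasible action."
    )

def llm_co_step(env, agent_id, game_description, directives, query_llm):
    state_description = env.describe_state(agent_id)    # natural-language state with relative distances
    feasible_actions = env.feasible_actions(agent_id)   # verb-based actions: "pick ...", "place ...", "move ..."
    prompt = build_prompt(game_description, directives, state_description, feasible_actions)
    reply = query_llm(prompt)                            # e.g., a call to GPT-4
    # Pick the first feasible action mentioned in the reply; fall back to a default otherwise.
    action = next((a for a in feasible_actions if a in reply), feasible_actions[0])
    env.execute(agent_id, action)                        # Action Manager expands it into low-level moves
    return action
```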
5 EXPERIMENTS AND RESULTS

In this section, we describe the experiments and results for the coordination ability of LLMs, with a focus on five aspects: Theory of Mind, Situated Reasoning, Sustained Coordination, Robustness to Partners, and Explicit Assistance.

5.1 THEORY OF MIND AND SITUATED REASONING

Figure 5: Performance of LLMs on the LLM-ToM-Reasoning test set. Partner action intent prediction accuracy reflects the Theory of Mind ability of the LLMs under test, and optimal action reasoning accuracy reflects their Situated Reasoning ability. GPT-4 achieves the best performance among the tested LLMs.

LLM-ToM-Reasoning test set. With the LLM-Co framework, we propose the LLM-ToM-Reasoning test set, a suite of 18 scenarios with accompanying questions spanning all three games: Collab Capture, Collab Escape, and Overcooked. The pure-text scenarios are formed from the outputs of the State Description and Feasible Action Generator of the LLM-Co framework. The test set only includes scenarios hand-picked to represent pivotal situations that require the agent under test to take its partner's possible next actions into active consideration, reason about the current state, and choose actions that "indirectly" lead to the best possible outcome. The same questions are shared across the test set: analyze the current state, infer the partner's potential next action, and predict the optimal next action from the perspective of a player. We annotate the ground truth answers to the questions in the test set manually and ensure a 100% human success rate during the cross-validation process.

GPT-4 outperforms the other LLMs on the LLM-ToM-Reasoning test set. We use the collected LLM-ToM-Reasoning test set to scrutinize the LLMs on the Theory of Mind (ToM) and Situated Reasoning aspects, which respectively refer to the ability to understand the beliefs and intentions of other entities and the ability to contextualize this understanding within the environment's dynamics to formulate appropriate responses. The LLMs under test are required to answer the questions in the LLM-ToM-Reasoning test set, and their answers are manually compared against the ground truth. First, we calculate the accuracy of the LLM predictions concerning their partner's actions to indicate ToM ability. Then, the accuracy of predictions for the next appropriate action and the analysis of the current scenario indicate Situated Reasoning effectiveness. As shown in Figure 5, GPT-4 outperforms the other LLMs, GPT-3.5-turbo, Vicuna-33B, and Vicuna-13B, with only a marginal difference to human performance, indicating a strong potential for understanding and carrying out continuous coordination tasks.

5.2 Sustained Coordination and Robustness to Partners

Sustained Coordination refers to the ability of agents to continuously collaborate and adapt their actions over extended periods. Robustness to Partners refers to an agent's ability to adjust and adapt when interacting with new or unseen partners. We use GPT-4 as the LLM of choice to test these aspects. The choice of LLM is dictated by the fact that only GPT-4 is able to display the satisfactory reasoning that is required consistently for Sustained Coordination. We evaluate the LLM-Co agent on 400 timesteps of gameplay in the Overcooked-AI environment. The evaluation metric used in Overcooked (Carroll et al., 2019a) is the sparse reward obtained when one whole delivery is completed by the agents.
Each delivery wins the agents 20 points. We use Self-Play with Proximal Policy Optimization (PPO) and Population-Based Training with PPO as the baselines for AI-AI gameplay. For benchmarking AI-human proxy gameplay, we use a PPO agent trained with a human model (a Behavior Cloning model trained on human-human gameplay data), the protocol established in Carroll et al. (2019a) and followed in subsequent works approaching Zero-Shot Coordination (Li et al., 2023; Zhao et al., 2023; Lou et al., 2023; Yu et al., 2023).

| Agent Type | Cramped Rm. | Asymm. Adv. | Coord. Ring | Forced Coord. | Counter Circ. |
|------------|-------------|-------------|-------------|---------------|---------------|
| PPO$_{SP}$ | 198.8 ± 4.06 | 167.2 ± 3.63 | **190.8** ± 4.25 | 151.9 ± 3.28 | 122.3 ± 3.80 |
| PBT | 216.9 ± 1.31 | 190.1 ± 8.64 | 173.8 ± 18.27 | 169.5 ± 10.09 | 140.1 ± 13.86 |
| LLM-Co | **220** ± 0 | **280** ± 0 | 180 ± 0 | **200** ± 0 | **160** ± 0 |

Table 1: Comparison of gameplay between self-play baselines (PPO and PBT) and LLM-Co agents. LLM-Co agents outperform RL methods on 4 out of 5 layouts, demonstrating highly effective reasoning under sustained coordination.

The LLM-Co agent efficiently completes the Overcooked-AI challenge over a long horizon. We pair two LLM-Co agents together to jointly coordinate and complete the cooking and delivery task in Overcooked-AI. This is analogous to testing agents trained with self-play methods being asked to jointly perform the task. We observe through visualizations of the gameplay that LLM-Co agents make effective use of all resources available to them to complete multiple deliveries. In fact, without being trained or fine-tuned for the task, LLM-Co agents outperform or nearly match self-play baselines trained using Proximal Policy Optimization (Schulman et al., 2017) or Population-Based Training (Jaderberg et al., 2017), which are the gold standard for multi-agent tasks. Table 1 shows a numerical summary of the scores obtained by agents. These scores represent averages over 100 runs, with standard deviations across 5 seeds for the MARL agents. For LLM-Co agents, the score obtained for a fixed game description and directives remains the same, as the agent always chooses the same medium-level action for a given state and history. This outcome is noteworthy because it demonstrates that Large Language Models (LLMs), specifically GPT-4 (OpenAI, 2023) in this case, can outperform RL agents at cooperative multi-agent tasks with minimal scaffolding. We observed that LLM agents are capable of achieving sustained coordination, adjusting to their partners, and correcting their own actions consistently.

The LLM-Co agent is robust to the choice of partner. It is highly likely that an agent is paired up with a biased or sub-optimal partner. It is known that self-play agents, when paired with humans, tend to struggle because the partner's behavior diverges from what they consider to be the optimal strategy (Carroll et al., 2019a).

| Agents | Cramped Rm. | Asymm. Adv. | Coord. Ring | Forced Coord. | Counter Circ. |
|------------|-------------|-------------|-------------|---------------|---------------|
| BC | $103.5 \pm 3.38$ | $136.5 \pm 7.00$ | $59.0 \pm 5.38$ | $20.5 \pm 4.33$ | $38.0 \pm 3.99$ |
| PPO$_{BC}$ | $156.4 \pm 1.48$ | $72.6 \pm 19.44$ | $126.4 \pm 3.24$ | $58.9 \pm 2.98$ | $69.5 \pm 2.18$ |
| LLM-Co | $160 \pm 0$ | $180 \pm 0$ | $160 \pm 0$ | $120 \pm 0$ | $140 \pm 0$ |

Playing from swapped positions:
| Asymm. Adv. | Coord. Ring | Forced Coord. | Counter Circ. | |------------|-------------|-------------|-------------|---------------|---------------| | BC | $110.0 \pm 3.39$ | $137.5 \pm 8.40$ | $70.0 \pm 4.00$ | $31.0 \pm 5.00$ | $44.0 \pm 3.02$ | | PPO$_{BC}$ | $163.9 \pm 1.61$ | $178.8 \pm 2.65$ | $129.8 \pm 3.59$ | $76.9 \pm 2.29$ | $57.6 \pm 2.50$ | | LLM-Co | $180 \pm 0$ | $140 \pm 0$ | $160 \pm 0$ | $80 \pm 0$ | $120 \pm 0$ | Table 2: Comparison of AI-Human Proxy Game play. We compare Behavior Cloning Agents, PPO$_{BC}$ Agents with LLM-Co agents utilizing the GPT-4 LLM. The LLM-Co agents are able to outperform or match the performance of Reinforcement Learning models, indicating that LLM agents are robust to the choice of partner agents. Carroll et al. (2019a). The LLM-Co agent, on the other hand, does not face this issue. Their actions are based on verbal reasoning, and they adapt to the current situation rather than adhering to a determined policy. Consequently, they outperform Self-play-based methods trained with human data at AI-human proxy gameplay as shown in table 5.2. The LLM-Co agent generates explainable outputs through free-text Reinforcement Learning (RL-based) agents lack the ability to provide an underlying rationale for their actions, making it challenging to understand how their actions contribute to broader objectives. This understanding is crucial for the development of safer and more reliable agents, as well as for debugging when unexpected behaviors occur. The LLM-Co agent addresses this gap by generating medium-level actions and providing high-level reasoning for such a selection. This allows us to extract comprehensive insights into the decision-making processes under given conditions by examining the "analysis" generated by the language model during gameplay. Using insights derived from this explainability, we report qualitative case studies in Appendix B. 5.3 Explicit Assistance | Conditions | Locked | Gated Delivery | |--------------------|--------|----------------| | Without Helper Directive | 160 | 0 | | With Helper Directive | 240 | 180 | Table 3: Comparison of Gameplay in the Overcooked-Assistance Layouts with and without Helper Directive. The results indicate that the Large Language Model needs to be prompted to be aware of situations where their partner might need assistance in order to be effective in the Overcooked-Assistance layouts. Finally, we test the ability of the LLM-Co agent to provide explicit assistance to their partners in the new Overcooked layouts defined in [3,4] where proactive help is necessary to complete the task. The LLM-Co agent requires a helping directive to choose to help The LLM-Co agent, provided with the same prompt and directives as used in Overcooked-AI, struggles to recognize and help their partner agents in Locked and Gated Delivery environments. However, a simple directive informing the LLM-Co agent to "help their partners when the situation demands" makes them actively look for opportunities to help their partners. We see that agents tend to help partner agents during the time they are waiting for their own soup to be cooked by choosing the open gates for the waiting agent. While this is not the most efficient strategy, which would have been to always help out a waiting agent, it still points to the agent’s ability to explicitly help out during coordination. Table 5.3 shows scores obtained by agents over 400 time steps. 
Both Locked and Gated Delivery require agents to notice a partner in need and consequently benefit from adding a directive to help out their partner. | Agents | Locked | Gated Delivery | |--------|------------|----------------| | PPO$_{SP}$ | $132.83 \pm 7.31$ | $134.88 \pm 5.99$ | | PBT | $175.8 \pm 1.69$ | $178.6 \pm 9.76$ | | LLM-Co | $\textbf{220} \pm 0$ | $\textbf{180} \pm 0$ | Table 4: Comparison of Gameplay on Overcooked-Assistance Layouts between RL baselines and LLM Agents. The RL baselines being able to effectively solve the deliveries indicates that the environments are solvable through self-play training. The high scores achieved by LLM agents demonstrate that LLM agents are capable of reasoning for providing explicit assistance to their partners. The LLM-Co agent outperforms MARL methods at Overcooked-Co-op layouts Table 5.3 shows the performance of Self Play agents trained using PPO and PBT and compares it with the abilities of the LLM-Co agent provided with a helper directive. Since a reward is provided to both agents for delivery in Self Play training, we expected to see them gain the ability to open gates as it results in a reward after a short delay. In spite of this, the LLM-Co agent has the upper hand in their ability to deduce the right actions required to facilitate their partners. 6 CONCLUSION In this study, we evaluate the reasoning abilities of Large Language Models for achieving Multi-agent coordination. We evaluate LLMS across five aspects necessary for coordination through comprehensive evaluations in three different environments. We introduce The LLM-Co Framework for enabling Large Language Models to play multi-agent coordination games. We also curate the LLM-ToM-Reasoning dataset to assess the Theory of Mind inference and Situated Reasoning Abilities of Large Language Models. We show that LLM Agents are capable of Sustained Coordination in the Overcooked Environment and are Robust to the choice of partner. Finally, we introduce two new layouts to the Overcooked-AI environment and demonstrate the ability of Large Language Models to provide explicit assistance to their partners during coordination games. REFERENCES Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, and Michael Bowling. The hanabi challenge: A new frontier for ai research. Artificial Intelligence, 280:103216, 2020. ISSN 0004-3702. doi: https://doi.org/10.1016/j.artint.2019.103216. URL https://www.sciencedirect.com/science/article/pii/S0004370219300116 Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, and Anca Dragan. On the Utility of Learning about Humans for Human-AI Coordination. Curran Associates Inc., Red Hook, NY, USA, 2019a. Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, and Anca Dragan. overcooked.ai. https://github.com/HumanCompatibleAI/overcooked_ai/tree/master, 2019b. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. 
URL https://lmsys.org/blog/2023-03-30-vicuna/ Chuang Gan, Jeremy Schwartz, Seth Alter, Damian Mrowca, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Michael Lingelbach, Aidan Curtis, Kevin Feigelis, Daniel M. Bear, Dan Gutfreund, David Cox, Antonio Torralba, James J. DiCarlo, Joshua B. Tenenbaum, Josh H. McDermott, and Daniel L. K. Yamins. Threedworld: A platform for interactive multi-modal physical simulation, 2021. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. arXiv preprint arXiv:2201.07207, 2022. Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population based training of neural networks, 2017. Unnat Jain, Luca Weihs, Eric Kolve, Mohammad Rastegari, Svetlana Lazebnik, Ali Farhadi, Alexander G. Schwing, and Aniruddha Kembhavi. Two body problem: Collaborative visual task completion. In CVPR, 2019. first two authors contributed equally. Unnat Jain, Luca Weihs, Eric Kolve, Ali Farhadi, Svetlana Lazebnik, Aniruddha Kembhavi, and Alexander G. Schwing. A cordial sync: Going beyond marginal policies for multi-agent embodied tasks. In ECCV, 2020. first two authors contributed equally. Michal Kosinski. Theory of mind might have spontaneously emerged in large language models, 2023. Yang Li, Shao Zhang, Jichen Sun, Yali Du, Ying Wen, Xinbing Wang, and Wei Pan. Cooperative open-ended learning framework for zero-shot coordination. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 20470–20484. PMLR, 2023. URL https://proceedings.mlr.press/v202/li23au.html. Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In arXiv preprint arXiv:2209.07753, 2022. Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang, and Yali Du. Pecan: Leveraging policy ensemble for context-aware zero-shot human-ai coordination. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’23, pp. 679–688, Richland, SC, 2023. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450394321. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pp. 6382–6393, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Zhao Mandi, Shreeya Jain, and Shuran Song. Roco: Dialectic multi-robot collaboration with large language models, 2023. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. Joon Sung Park, Joseph C. O’Brien, Carrie J. 
Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior, 2023.
xbXASfz8MD
In Proposition 4.1 you mention that $\psi$ and $\phi$ are inverses of each other. This is not correct: these functions are inverses of each other only on the input dataset. It raises the following questions: how robust is the inverse property when you move away from the training dataset? How robust is the detected symmetry when you move away from the training dataset? It reminds me of the following paper: *Moskalev A. et al. On genuine invariance learning without weight-tying. Topological, Algebraic and Geometric Learning Workshops 2023. – PMLR, 2023*
LATENT SPACE SYMMETRY DISCOVERY Anonymous authors Paper under double-blind review ABSTRACT Equivariant neural networks require explicit knowledge of the symmetry group. Automatic symmetry discovery methods aim to relax this constraint and learn invariance and equivariance from data. However, existing symmetry discovery methods are limited to linear symmetries in their search space and cannot handle the complexity of symmetries in real-world, often high-dimensional data. We propose a novel generative model, Latent LieGAN (LaLiGAN), which can discover nonlinear symmetries from data. It learns a mapping from data to a latent space where the symmetries become linear and simultaneously discovers symmetries in the latent space. Theoretically, we show that our method can express any nonlinear symmetry under certain conditions. Experimentally, our method can capture the intrinsic symmetry in high-dimensional observations, which results in a well-structured latent space that is useful for other downstream tasks. We demonstrate the use cases for LaLiGAN in improving equation discovery and long-term forecasting for various dynamical systems. 1 INTRODUCTION Symmetry plays an important role in the success of deep neural networks (Bronstein et al., 2021). Many equivariant networks have been developed to enforce various symmetries in data from images to graphs (Weller & Cesal, 2019; Cohen et al., 2019a; Zaheer et al., 2017; Finzi et al., 2020; Kondor & Trivedi, 2018; Cohen et al., 2019b; Finzi et al., 2021; Bekkers, 2019). A critical limitation of existing equivariant networks is that they require knowing the symmetry a priori. However, for complex real-world data, the underlying symmetries may be unknown or challenging to articulate through programming. Recent years have seen exciting attempts towards automatic symmetry discovery from data (Dehmamy et al., 2021; Moskalev et al., 2022; Benton et al., 2020; Zhou et al., 2021), but most of them search in only a limited space of symmetries, such as subsets of known groups or finite groups. LieGAN (Yang et al., 2023) can discover various types of symmetries, but its search space is still constrained to general linear groups. Successful discovery can only be achieved when observations are measured in an ideal coordinate system where linear symmetry is present. Unfortunately, real-world data often contain nonlinear symmetries, such as high-dimensional dynamics that evolve on a low-dimensional manifold (Champion et al., 2019), or 2D images of 3D objects (Garrido et al., 2023). Another line of study focuses on learning equivariant representations (Park et al., 2022; Yu et al., 2022; Dangovski et al., 2021; Quessard et al., 2020). These approaches learn a latent embedding space with particular symmetries. However, they still require prior knowledge about the symmetry in the latent space. Also, they often assume additional information about group transformation associated with each data point, which is not always available in practice. Figure 1: An example of SO(2) nonlinear group action $\pi'$ on $V = \mathbb{R}^2$ and its decomposition into an encoder $\phi$, a linear representation $\pi$ and a decoder $\psi$. Each trajectory is a group action orbit containing a random $v \in V$. In this work, we propose a novel framework, LaLiGAN, for discovering symmetries of nonlinear group actions. LaLiGAN decomposes the group transformations into nonlinear mappings between data space and latent space, and a linear group representation in the latent space. 
Figure 1 provides an example of such decomposition, where a nonlinear action of SO(2) on $V = \mathbb{R}^2$ corresponds to standard 2D rotation on latent vectors $z = \phi(v)$. Then, we utilize an existing symmetry discovery algorithm (Yang et al., 2023) with careful adaptations for discovering symmetries in the latent space. Normally, our framework has learnable group representation and does not require information about specific groups. However, when the symmetry group is known, it can also be used to learn equivariant representations without the information of group elements associated with each data sample. It is a highly flexible framework and can be applied to scenarios with scarce domain knowledge. The significance of latent space symmetry discovery is multi-fold. From the perspective of symmetry discovery, it further expands the search space of symmetries beyond linear group actions. For representation learning, learning a latent space in which symmetry becomes linear places a strong inductive bias on the structure of latent representations. Such a simple latent structure proves to be useful in various downstream tasks, such as equation discovery and long-term forecasting in temporal systems. Furthermore, compared to equivariant representation learning, as the symmetry is no longer fixed but learnable, our method can discover latent spaces with previously unknown symmetries. In summary, our main contributions include: - We develop LaLiGAN, a novel framework for discovering symmetries of nonlinear group actions. - We provide the theoretical guarantee that LaLiGAN has the expressive power to approximate any nonlinear symmetry under certain conditions. - Our method can discover well-structured latent spaces with interpretable symmetries in high-dimensional and nonlinear dynamical systems. - The discovered symmetry can be applied to equation discovery, leading to simpler equation forms and improved long-term prediction accuracy. 2 RELATED WORKS Automatic symmetry discovery. Automatic symmetry discovery aims to search and identify unknown symmetries in data. Current symmetry discovery techniques vary a lot in their search space for symmetries, such as learning discrete finite groups (Zhou et al., 2021; Karjol et al., 2023), learning group subsets that represent the extent of symmetry within known groups (Benton et al., 2020; Romero & Lohit, 2022; Chatzipantazis et al., 2021), and learning individual symmetry transformations on dataset distribution (Desai et al., 2022). Attempts have been made to discover general continuous symmetries based on Lie theory. For example, L-conv (Dehmmay et al., 2021) works with Lie algebra to approximate any group equivariant functions. LieGG (Moksalev et al., 2022) extracts symmetry from a learned network from its polarization matrix. LieGAN (Yang et al., 2023) proposes a general framework for discovering the symmetries of continuous Lie groups and discrete subgroups. These methods address general linear group symmetry in the data, which is the largest search space so far. Our work expands the search space to non-linear symmetries. Learning equivariant representation. Instead of working in the data space where symmetry transformations can be complicated, many works use autoencoders to learn a latent space with pre-specified symmetries (Hinton et al., 2011; Falorsi et al., 2018). Among recent works (Yu et al., 2022; Park et al., 2022), learn equivariant features that can be used for downstream prediction tasks. Shakerinava et al. (2022); Dangovski et al. 
(2021) use contrastive losses to learn equivariant representations in a self-supervised manner. Quessard et al. (2020); Marchetti et al. (2023) focus on learning disentangled representations that are highly interpretable. Winter et al. (2022); Wieser et al. (2020) split the latent space into group-invariant and equivariant subspaces. While the emphases of these works vary, the common assumption is that we have to know the symmetry group a priori. Many of them also assume additional information such as group element associated with each data point (Garrido et al., 2023) or paired samples under certain transformations (Shakerinava et al., 2022). Our goal is more ambitious: design a model to simultaneously learn symmetries and the corresponding equivariant representations in latent space with minimal supervision. Discovering governing equations. Latent space discovery of governing equations is first introduced in SINDy Autoencoder (Champion et al., 2019), which combines the sparse regression technique for discovering dynamics in Brunton et al. (2016) and an autoencoder network to explore coordinate transformations that lead to parsimonious equations. Several variants of this method have been developed to improve accuracy and robustness to noise (Kaheman et al., 2020; Messenger & Bortz, 2021; Fasel et al., 2022). However, due to the absence of physical constraints, their discovered equations may not respect some physical properties such as isotropy and energy conservation. We highlight this field as an important application of our symmetry discovery method, where enforcing symmetry can regularize the latent space and improve the performance of equation discovery models. 3 REPRESENTATION VS NONLINEAR GROUP ACTION Equivariant neural networks build on the notion of symmetry groups and their transformations on data. Given a vector space $V$, a group $G$ transforms $v \in V$ via a group action $\pi : G \times V \rightarrow V$ which maps the identity element $e$ to identity transformation, i.e. $\pi(e, v) = v$, and is compatible with group element composition, i.e. $\pi(g_1, \pi(g_2, v)) = \pi(g_1g_2, v)$. Many existing equivariant networks assume that the group acts linearly on the input vector space. Examples include E(2) symmetry acting on planar image signals (Weiler & Cesa, 2019), and SO(3) symmetry acting on spherical signals (Cohen et al., 2018). In these cases, the linear group action is called a group representation. The group representation is defined as a map $\rho : G \rightarrow GL(n)$ where $\rho(g) \in \mathbb{R}^{n \times n}$ is an invertible matrix that transforms any vector $v \in \mathbb{R}^n$ by matrix multiplication. Given the group representations on the input and the output spaces, a $G$-equivariant network $f : X \rightarrow Y$ needs to satisfy $\rho_Y(g)f(x) = f(\rho_X(g)x)$. A special case of equivariance is invariance, where the group action on the output space is trivial, i.e. $\rho_Y(g) = \text{id}$. Equivariant networks with such linear symmetry transformations have several limitations. It is not always possible to find a linear action of the group on the data, e.g. the action of SO(3) on 2D images of 3D objects. Also, we may not even know the symmetry group $G$ itself, so learning equivariant representations for known groups is also not an option. Our goal is to discover both the symmetry group and its nonlinear group action on the data. 
Concretely, given the input and output data space $X \subseteq \mathbb{R}^n$, $Y \subseteq \mathbb{R}^m$, and the data samples $(x_i, y_i) \in X \times Y$ with an underlying function $y = f(x)$, we want to find a group $G$ and its nonlinear actions $\pi'_X : G \times X \rightarrow X$ and $\pi'_Y : G \times Y \rightarrow Y$ such that $\pi'_Y(g, f(x)) = f(\pi'_X(g, x))$. We denote nonlinear group actions as $\pi'$ to distinguish them from group representations. In the following sections, we will also refer to group representations and nonlinear group actions as linear symmetries and nonlinear symmetries. We will use the theory of Lie groups to describe the continuous symmetry groups of data. We provide some preliminaries about Lie groups and their representations in Appendix B. 4 LaLiGAN: DISCOVERING NONLINEAR SYMMETRY TRANSFORMATIONS 4.1 DECOMPOSING THE NONLINEAR GROUP ACTION Our major goal is to learn a nonlinear action of a group $G$ on a vector space $V$: $\pi' : G \times V \rightarrow V$. While we can use a neural network $f_\theta$ to directly approximate this function, it does not guarantee the identity and compatibility conditions for a proper group action, i.e. $f_\theta(id, x) = x$ and $f_\theta(g_1, f_\theta(g_2, x)) = f_\theta(g_1g_2, x)$. Instead, we propose to decompose the nonlinear group action as nonlinear maps and a linear group representation. Concretely, we represent any nonlinear group action $\pi' : G \times V \rightarrow V$ as $$\pi'(g, \cdot) = \psi \circ \pi(g) \circ \phi,$$ where $\phi : V \rightarrow Z$ and $\psi : Z \rightarrow V$ are functions parametrized by neural networks, and $\pi(g) : G \rightarrow GL(k)$ is a group representation acting on the latent vector space $Z = \mathbb{R}^k$. We specify the dimensionality of $Z$ as a hyperparameter based on specific tasks. One can easily verify that Proposition 4.1. If $\phi$ and $\psi$ are inverse of each other, then $\pi'(g, \cdot) = \psi \circ \pi(g) \circ \phi$ is a valid group action that satisfies identity and compatibility axioms. Figure 2: Overview of the proposed LaLiGAN framework. The encoder maps the original observations to a latent space. The latent representation is transformed with the linear group action from the generator. The decoder reconstructs the inputs from original and transformed representations. The discriminator is trained to recognize the difference between the original and the transformed samples. In practice, we train the networks $\phi$ and $\psi$ with a reconstruction loss $l_{\text{recon}} = \mathbb{E}_v \|\psi(\phi(v)) - v\|^2$ to ensure they are the approximate inverse of each other. Intuitively, $\phi$ and $\psi$ form an autoencoder that maps between the input vector space and a latent space. Through the decomposition of the nonlinear group action, our method learns (1) the symmetry group on a latent space via its linear representation, and (2) a pair of inverse mappings between the input space and the symmetric latent space. We can provide theoretical guarantees for the expressivity of such a decomposition. The following theorem shows that our proposed decomposition and neural network parametrization can approximate nonlinear group actions under certain conditions. Detailed proof is deferred to Appendix C. **Theorem 4.2 (Universal Approximation of Nonlinear Group Action).** Let $G \leq \text{GL}(k; \mathbb{R})$ be a compact Lie group that acts smoothly, freely and properly on $V = \mathbb{R}^k$ via a continuous group action $\pi': G \times V \to V$. 
The group action, restricted to any bounded subset of the group, can be approximated by the decomposition $\pi'(g, \cdot) \approx \psi \circ \pi(q) \circ \phi$ if it admits a simply connected orbit space $V/G$, where $\psi$ and $\phi$ are fixed arbitrary-width neural networks with one hidden layer, and $\pi$ is a linear group representation. ### 4.2 Symmetry Discovery Now that we have constructed the nonlinear group action, we proceed to discover the symmetry group $G$. We restrict our search space to $G \leq \text{GL}(k)$, where $k$ is the latent dimensionality defined in the previous decomposition. In this way, we can represent any group element $g$ by its standard representation $\pi(g) \in \mathbb{R}^{k \times k}$. We expect this search space of the general linear group to be big enough to cover the types of symmetries in most real-world systems. We follow the approach in Yang et al. (2023) to discover the linear symmetry with generative adversarial training. Concretely, a symmetry generator learns a Lie algebra basis $\{L_i \in \mathbb{R}^{k \times k}\}$ and generates the standard representations of group elements by sampling the linear combination coefficients $w_i \in \mathbb{R}$ for the Lie algebra basis: $$w_i \sim \gamma(w), \quad \pi(g) = \exp \left[ \sum_i w_i L_i \right]$$ (2) where $\gamma$ is a distribution (e.g. Gaussian) for the coefficients and $\exp$ denotes the matrix exponential. As the Lie algebra basis $\{L_i\}$ uniquely determines the structure of the Lie group, we can learn the symmetry group by learning these $L_i$ via standard gradient-based optimization techniques. Then, the symmetry generator introduced in (2) samples random group elements that transform the data points $v_i = (x_i, y_i)$. The discriminator is trained to distinguish the original “real” data and the transformed “fake” data. The generator and the discriminator are trained adversarially so that the generator learns to produce group elements that preserve the data distribution while transforming each data point. The group learned by the generator is then considered the discovered symmetry of the data. Figure 2 shows the overall pipeline of our method. We term our method Latent LieGAN (LaLiGAN), as we learn the Lie group representations on a latent space. A key difference of our method is the nonlinearity of the group action on data, which is achieved through the decomposition in [19]. Besides, we use the latent representations as the discriminator input. The latent vectors before the group transformations are the “real” samples, and those after the transformations are “fake”. Optionally, we also concatenate each latent vector with its reconstruction in observation space as the discriminator input, which is shown to accelerate convergence. In the most general form, our training objective is formulated as \[ l_{\text{total}} = w_{\text{GAN}} \cdot l_{\text{GAN}} + w_{\text{recon}} \cdot l_{\text{recon}}, \quad l_{\text{recon}} = \mathbb{E}_v \left[ \| (\psi \circ \phi)(v) - v \|^2 \right], \] \[ l_{\text{GAN}} = \mathbb{E}_{v,g} \left[ \log D(\phi(v), (\psi \circ \phi)(v)) + \log(1 - D((\pi(g) \circ \phi)(v), (\psi \circ \pi(g) \circ \phi)(v))) \right] \] where \(D\) is the discriminator, \(\pi(g) = \exp(w^T L_i)\) is the representation of the group element sampled from the generator, and \(\phi\) and \(\psi\) are neural networks that compose the nonlinear group action together with \(\pi(g)\). 
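To make the training procedure concrete, the following is a minimal sketch, in PyTorch, of one evaluation of the joint objective in Eqs. (2)-(3): a learnable Lie algebra basis is turned into a group element via the matrix exponential, applied to the latent representation, and combined with the reconstruction and adversarial terms. The network sizes, coefficient distribution, and loss weights are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of one LaLiGAN objective evaluation (our own simplification, not the authors' code).
import torch
import torch.nn as nn

k, n, c = 2, 4, 1                                                      # latent dim, data dim, # of Lie algebra basis elements (assumed)
phi = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))    # encoder phi
psi = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))    # decoder psi
D   = nn.Sequential(nn.Linear(k + n, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Sigmoid())                   # discriminator on (latent, reconstruction)
L = nn.Parameter(torch.randn(c, k, k) * 0.1)                          # learnable Lie algebra basis {L_i}

def sample_group_element(batch_size, sigma=1.0):
    """pi(g) = exp(sum_i w_i L_i) with w_i ~ N(0, sigma^2), as in Eq. (2)."""
    w = torch.randn(batch_size, c) * sigma
    A = torch.einsum('bc,cij->bij', w, L)                              # batch of Lie algebra elements
    return torch.matrix_exp(A)                                         # batch of group representations pi(g)

def laligan_losses(v, w_gan=1.0, w_recon=10.0):
    z = phi(v)                                                         # latent representation
    v_rec = psi(z)                                                     # reconstruction
    g = sample_group_element(v.shape[0])
    z_t = torch.einsum('bij,bj->bi', g, z)                             # transformed latent vector pi(g) z
    v_t = psi(z_t)                                                     # decoded transformed ("fake") sample
    real = D(torch.cat([z, v_rec], dim=-1))                            # discriminator on original latent + reconstruction
    fake = D(torch.cat([z_t, v_t], dim=-1))                            # discriminator on transformed latent + decoding
    l_recon = ((v_rec - v) ** 2).mean()
    l_gan = (torch.log(real + 1e-8) + torch.log(1 - fake + 1e-8)).mean()
    return w_gan * l_gan + w_recon * l_recon                           # l_total, Eq. (3)

loss = laligan_losses(torch.randn(32, n))                              # one joint objective evaluation on a random batch
```

In an actual training run, the discriminator would be updated to maximize $l_{\text{GAN}}$ while the generator components ($L_i$, $\phi$, $\psi$) minimize $l_{\text{total}}$, following standard adversarial optimization.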
The learnable components include \(D, L_i, \phi\) and \(\psi\), which are optimized under the joint objective \(l_{\text{total}}\). The loss weighting coefficients \(w_{\text{GAN}}\) and \(w_{\text{recon}}\) are selected based on specific tasks. 4.3 Structuring the Latent Space Disentangled representation. Latent space representations may capture different aspects of the observations. Consider an image of \(N\) 3D objects as an example. A possible latent representation consists of the orientation of each object \(r_o \in \mathbb{R}^{3N}\), the camera perspective \(r_c \in \mathbb{R}^3\), light intensity \(i \in \mathbb{R}^+\), etc. Each component can be transformed by a separate group action, independent of each other. For these scenarios, we provide the option to specify how the latent space is decomposed as independent subspaces, i.e. \(Z = \oplus_{i=1}^N Z_i\), each of which is acted on by a symmetry group \(G_i\). This avoids searching in the unnecessarily large space of group actions with no nontrivial invariant subspace. This aligns with the notion of disentangled representation in Higgins et al. (2018). Regularizing the latent structure. The latent space produced by an encoder network can be largely arbitrary, leading to fallacious symmetry or no symmetry at all. We observe some failure modes caused by undesirable latent space structures and propose some regularization methods. First, the latent representations tend to collapse to a low-dimensional subspace where nontrivially parametrized group representations can act as identity. Such a fallacious symmetry provides an easy workaround for the symmetry generator. For example, this happens in Figure 3a where the transformations generated by \(L = [2, -2; -1, 1] \in \mathbb{R}^{2 \times 2}\) leave the latent representations in a 1D subspace approximately unchanged. This is undesirable because we want the symmetry generator to learn nontrivial transformations. In practice, we use orthogonal parametrization in the final linear layer of the encoder to enforce a different output in each dimension. Another failure mode occurs when the latent representations are not centered at the origin. The linear group representation \(v \mapsto \pi(g)v\) implicitly assumes that the vector space is centered at the origin and cannot describe the symmetry otherwise. Figure 3b provides an example of a circular latent space centered at \((1, 1)\). Directly applying the SO(2) transformations result in a different distribution. We observe that the encoder struggles to learn to center the latent representations at the origin. Therefore, we enforce this property by normalizing each batch of data to have zero mean before applying the transformations from the symmetry generator. Figure 3: Potential failure modes in latent space symmetry discovery. (a) Fallacious symmetry in low-dimensional subspace. (b) Absence of symmetry in a biased latent space. 4.4 Applications of Latent Symmetry Discovery Learning equivariant representation. Learning equivariant representation can be viewed as a special case of our method, where the symmetry group $G$ and its representation $\pi$ are known. Our encoder $\phi$ then becomes a $G$-equivariant function in the sense that $$\phi(\pi'(g, x)) = \phi((\psi \circ \pi(g) \circ \phi)(x)) = \pi(g)\phi(x)$$ (4) In other words, by fixing $\pi$ to a known group representation, our method learns a $G$-equivariant representation $z = \phi(x)$. 
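As a quick illustration of this special case, the sketch below fixes $\pi$ to the standard SO(2) representation and measures how well a trained encoder/decoder pair satisfies Eq. (4). Here $\phi$ and $\psi$ are assumed to be trained networks from the framework above, and the check holds exactly only insofar as $\psi \circ \phi$ approximates the identity on the data of interest.

```python
# Sketch of the equivariant-representation special case (Eq. 4): with pi fixed to 2D rotations,
# the encoder should satisfy phi(pi'(g, x)) ≈ pi(g) phi(x), where pi'(g, x) = psi(pi(g) phi(x)).
import torch

def rotation(theta):
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])   # fixed pi(g) for SO(2)

def equivariance_error(phi, psi, x, theta):
    R = rotation(theta)
    z = phi(x)                         # latent representation phi(x)
    x_t = psi(z @ R.T)                 # nonlinear action pi'(g, x) = psi(pi(g) phi(x))
    lhs = phi(x_t)                     # phi(pi'(g, x))
    rhs = z @ R.T                      # pi(g) phi(x)
    return (lhs - rhs).norm(dim=-1).mean()
```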
Compared to other methods, LaLiGAN can learn equivariant representation without any knowledge of the group transformation associated with each data sample. Joint discovery of governing equation. LaLiGAN is analogous to latent space equation discovery techniques [Champion et al., 2019] in terms of using an autoencoder network for nonlinear coordinate transformations. We can use the latent space learned by LaLiGAN for discovering equations. Concretely, if we want to find a latent space governing equation parameterized by $\theta$: $\dot{z} = F_\theta(z)$, where $z = \phi(x)$ is obtained from our encoder network, we fix the encoder $\phi$ and optimize $\theta$ with the objective $l_{eq} = \mathbb{E}_{x,z}\|(\nabla_x z)\dot{x} - F_\theta(z)\|^2$. While equation discovery and symmetry discovery are two seemingly distinct tasks, we will show in the experiment that learning a symmetric latent space can significantly improve the quality of the discovered equation in terms of both its simplicity and its long-term prediction accuracy. 5 Latent Symmetry in Dynamical Systems 5.1 Datasets Reaction-diffusion. Many high-dimensional datasets in practical engineering and science problems derive from dynamical systems governed by partial differential equations. These systems often do not exhibit simple linear symmetries in the observation space, but their dynamics might evolve on a low-dimensional manifold with interesting symmetry properties. As an example, we consider a $\lambda-\omega$ reaction-diffusion system [Champion et al., 2019] governed by $$u_t = (1 - (u^2 + v^2))u + \beta(u^2 + v^2)v + d_1(u_{xx} + u_{yy})$$ $$v_t = -\beta(u^2 + v^2)u + (1 - (u^2 + v^2))v + d_2(u_{xx} + u_{yy})$$ (5) with $d_1 = d_2 = 0.1$ and $\beta = 1$. We discretize the 2D space into a $100 \times 100$ grid, which leads to an input dimension of $10^4$. Figure 4b visualizes a few snapshots of this system. We simulate the system up to $T = 6000$ timesteps with step size $\Delta t = 0.05$. The reaction-diffusion system is an example of low-dimensional latent symmetry in high-dimensional observations. In fact, the absence of linear symmetry is not exclusive to high-dimensional systems. We also investigate two low-dimensional dynamics, where their nonlinear evolution prevents any kind of linear symmetry, but our method can still discover meaningful symmetries in the latent space. Nonlinear pendulum. The movement of a simple pendulum can be described by $\dot{q} = p$, $\dot{p} = -\omega^2 \sin(q)$, with $\omega$ being the natural frequency and $q$ and $p$ the angular displacement and angular momentum. In our experiment, we use $\omega = 1$. We simulate $N = 200$ trajectories up to $T = 500$ timesteps with $\Delta t = 0.02$. Lotka-Volterra System. The Lotka-Volterra equations are a pair of nonlinear ODEs that characterize the dynamics of predator-prey interaction. We consider the canonical form of the equations, $\dot{p} = a - bp$, $\dot{q} = cp - d$, where $p$ and $q$ are the logarithm population densities of prey and predator, and the parameters $a, b, c, d$ indicate the growth and death rate of the two populations. In our experiment, we use $a = 2/3$, $b = 4/3$, and $c = d = 1$. We simulate $N = 200$ trajectories up to $T = 10^4$ timesteps with $\Delta t = 0.002$. 5.2 Symmetry Discovery We train LaLiGAN to learn the nonlinear mappings between observations and latent representations, along with the linear symmetry in the latent space. 
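Before turning to the discovered symmetries, the following sketch shows how the pendulum trajectories described in Section 5.1 can be generated from the stated dynamics ($\dot{q} = p$, $\dot{p} = -\omega^2 \sin q$ with $\omega = 1$, $\Delta t = 0.02$, $T = 500$, $N = 200$). The RK4 integrator and the initial-condition distribution are our assumptions, since the text only specifies the equations, step size, and trajectory counts.

```python
# Minimal sketch of generating the nonlinear-pendulum training trajectories (assumed setup).
import numpy as np

def pendulum_rhs(state, omega=1.0):
    q, p = state
    return np.array([p, -omega**2 * np.sin(q)])       # q_dot = p, p_dot = -omega^2 sin(q)

def simulate(q0, p0, T=500, dt=0.02):
    traj = np.zeros((T, 2))
    state = np.array([q0, p0], dtype=float)
    for t in range(T):
        traj[t] = state
        k1 = pendulum_rhs(state)                        # classical RK4 step (integrator choice assumed)
        k2 = pendulum_rhs(state + 0.5 * dt * k1)
        k3 = pendulum_rhs(state + 0.5 * dt * k2)
        k4 = pendulum_rhs(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# N = 200 trajectories; the initial-condition range is an illustrative assumption.
trajectories = np.stack([simulate(*np.random.uniform(-1.5, 1.5, size=2)) for _ in range(200)])
```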
We aim to discover the equivariance of latent dynamics, i.e. \( z_{t+1} = f(z_t) \Rightarrow gz_{t+1} = f(gz_t) \). Therefore, we take two consecutive timesteps as input, encode them to latent representations with the same encoder weights, and apply the same transformations sampled from the symmetry generator.

Figure 4: Symmetry discovery in reaction-diffusion system with 2D latent space. (a) Latent representations of the system at all timesteps. (b) Randomly selected samples from the dataset. (c) Samples transformed by LaLiGAN are similar to the original data. (d) Samples transformed by the baseline, linear LieGAN, are significantly different from the original data.

Figure 5: Latent symmetry discovery in nonlinear pendulum (upper) and Lotka-Volterra equations (lower). (a) Original trajectories of the systems. The color of each trajectory corresponds to its Hamiltonian. (b) The trajectories are mapped to a symmetric latent space. (c) Original trajectories transformed by LaLiGAN. (d) Original trajectories transformed by linear LieGAN.

For the reaction-diffusion system, we follow the setting in Champion et al. (2019) and set the latent dimension \( k = 2 \). Figure 4a shows how the system evolves in the latent space throughout \( T = 5000 \) timesteps. The Lie algebra basis discovered in the latent space is \( L = [0.06, -3.07; 3.05, -0.04] \). This suggests an approximate SO(2) symmetry, which is also evident from the visualization. For the pendulum and the Lotka-Volterra system, we also set the latent dimensions to 2, which is the same as their input dimensions. Figure 5b shows the trajectories of these two systems in the latent space, with the discovered symmetries \( L_{\text{pendulum}} = [0, -5.24; 2.16, 0] \) and \( L_{\text{LV}} = [0, 2.43; -2.74, 0] \). These indicate rotation symmetries up to a certain scaling in the latent dimensions.

The validity of the discovered symmetry can be verified by visually inspecting the difference between the transformed and the original samples. For the reaction-diffusion system, Figure 4c shows some samples with random transformations produced by our method, which are similar to the original data displayed in Figure 4b. We also apply the original LieGAN to this task for comparison, and the transformed samples are shown in Figure 4d. These samples contain obvious artifacts and are noticeably different from the original data, which suggests the necessity of our method when linear symmetry does not exist in observation space. Similarly, for the pendulum and the Lotka-Volterra system, we use the learned symmetries to transform each entire trajectory, as is shown in Figure 5c. Each trajectory is transformed from the original trajectory of the same color. While each individual data point is taken into a new position, the entire trajectories remain similar before and after transformation, suggesting that the discovered transformations are indeed the symmetries of these systems. In contrast, the linear symmetries learned by LieGAN do not preserve valid trajectories in the observation space, as shown in Figure 5d.

5.3 Effect of Latent Dimensionality

The latent dimension $k$ is a hyperparameter in our method. However, it is not always possible to choose the perfect latent dimension that matches the intrinsic dimension of the system and uncovers symmetry in latent space. To study the robustness of our method under a less ideal hyperparameter configuration, we set the latent dimension to $k = 3$ for the reaction-diffusion system and repeat the experiment.
As shown in Figure 6a, the Lie algebra representation is skew-symmetric, which indicates the symmetry of rotations around a particular axis. This can be easily confirmed as all the latent representations roughly dwell on a circular 2D subspace. Although it is not the most simple representation, our method still manages to discover the rotation symmetry as in 2D latent space. ![Figure 6](image) Figure 6: Modeling reaction-diffusion system in 3D latent space. (a) The latent representations before and after our discovered symmetry transformations. (b) The discovered latent space with SINDy but without LaLiGAN. (c-d) Simulation trajectory in the previous two latent spaces. 5.4 Equation Discovery We demonstrate the benefit of learning latent symmetry by using the latent space to discover governing equations. This is a commonly considered problem in these dynamical systems. We use SINDy (Brunton et al., 2016) (Champion et al., 2019) as the equation discovery algorithm, with up to second order polynomials as candidate functions. The comparison is made between applying SINDy on the latent space learned by our method (LaLiGAN + SINDy) and using the SINDy autoencoder to learn its own latent space (SINDy AE). The results for the reaction-diffusion system are shown in Table 1. The discovered equations from both methods have similar forms in the 2D latent space. In the 3D latent space, the governing equation learned in the LaLiGAN latent space remains linear. On the other hand, applying the SINDy autoencoder alone results in a nonsymmetric latent space (Figure 6b) and a highly complicated governing equation with second-order terms. | Method | 2D | 3D | |-----------------|--------------------------------------------------------------------|--------------------------------------------------------------------| | LaLiGAN + SINDy | $\dot{z}_1 = -0.91z_2$, $\dot{z}_2 = -0.91z_1$ | $\dot{z}_1 = 0.58z_2 - 0.40z_3$, $\dot{z}_2 = -0.56z_1 + 0.54z_3$, $\dot{z}_3 = 0.45z_1 - 0.57z_2$ | | SINDy AE | $\dot{z}_1 = -0.85z_2$, $\dot{z}_2 = 0.97z_1$ | $\dot{z}_1 = 0.65z_2 - 0.16z_3 + \Theta(z^2)$, $\dot{z}_2 = -0.57z_1 + 0.18z_2 + \Theta(z^2)$, $\dot{z}_3 = 0.45z_1 - 0.57z_2 + \Theta(z^2)$ | Table 1: Equation discovery on 2D/3D latent spaces for R-D system. Complete results are available in Appendix A.1. Long-term forecasting. To further verify the accuracy of the discovered equations, we use these equations to simulate the dynamics in the latent space. Concretely, given the initial input frame $x_0$, we obtain its latent representation $\hat{z}_0 = \phi(x_0)$ and predict the future $T$ timesteps by iteratively computing $\hat{z}_{t+1} = \hat{z}_t + F(\hat{z}_t) \cdot \Delta t$, where $\hat{z} = F(\hat{z})$ denotes the discovered governing equation. Then, we map the representations back to the input space by $\hat{x}_t = \psi(\hat{z}_t)$. Figure 6c and 6d show the simulated latent trajectories from the equations discovered in 3D latent space with and without LaLiGAN. The trajectory remains close to ground truth in the symmetric latent space but diverges quickly for the equation discovered by SINDy AE. We also evaluate the forecasting accuracy quantitatively by the relative MSE between the prediction and ground truth in the observation space, as is shown in Figure 7. Besides the symbolic models in Table 1, we also include Neural ODE (Chen et al., 2018) as a baseline. Similar to the symbolic equation discovery, it can also predict the dynamics at arbitrary timesteps with an ODE parametrized by neural nets. 
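For concreteness, a minimal sketch of this forecasting procedure is given below, using the 2D latent equation reported in Table 1 as the discovered $F$. The encoder $\phi$ and decoder $\psi$ are assumed to be the trained networks, and the explicit Euler update mirrors $\hat{z}_{t+1} = \hat{z}_t + F(\hat{z}_t) \cdot \Delta t$.

```python
# Sketch of long-term forecasting with the discovered latent equation (our illustration, not the authors' code).
import numpy as np

A = np.array([[0.0, -0.91],
              [-0.91, 0.0]])            # z1_dot = -0.91 z2, z2_dot = -0.91 z1 (2D equation as reported in Table 1)

def F(z):
    return A @ z                         # discovered (linear) latent governing equation

def forecast(phi, psi, x0, T, dt):
    z = phi(x0)                          # z_hat_0 = phi(x_0)
    xs = []
    for _ in range(T):
        z = z + F(z) * dt                # z_hat_{t+1} = z_hat_t + F(z_hat_t) * dt (explicit Euler)
        xs.append(psi(z))                # map back to observation space: x_hat_t = psi(z_hat_t)
    return xs
```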
Figure 7 shows that the discovered equation learned with latent space symmetry outperforms both the equation from vanilla SINDy AE and the Neural ODE model in this task of long-term dynamics forecasting. We also conduct the same experiments of equation discovery and long-term forecasting as for the nonlinear pendulum and the Lotka-Volterra system. While they have simple closed-form governing equations in the observation space, we find that discovering a latent space with learnable symmetry can still be beneficial. The symmetry enforces linear governing equations and leads to reduced error accumulation in long-term forecasting. The detailed results are available in Appendix A.2. 6 Learning Equivariant Representation Figure 8: Learning equivariant representation of the double-bump world. (a) Learned latent space as the direct sum of two 2D subspaces. The color of the data points corresponds to the location of the rectangular bump in the first component and the triangular bump in the second. (b) From left to right: (1) an original signal $x \in \mathbb{R}^{64}$; (2) reconstructed signal $\psi(\phi(x))$; (3-4) reconstructed signals from transformed latent representations, $\psi((\pi(\theta_1) \oplus I)\phi(x))$ and $\psi((I \oplus \pi(\theta_2))\phi(x))$. The red lines are the bump centers in the original signal. When we know the linear group representation, we can use LaLiGAN for learning the corresponding group equivariant representation. Unlike previous works (Garrido et al., 2023; Shakerinava et al., 2022), we learn it without any knowledge of the group element associated with each data point. We consider the example of a double-bump world in Shakerinava et al. (2022). It consists of a rectangular and a triangular bump signal, both cyclically shifted in a fixed-length window. We use the original experiment setting with signal length 64 and bump length 16, visualized in Figure 8B. The cyclic translation of each bump forms an SO(2) group. As each bump is shifted independently, the symmetry group for the composed signal is SO(2) × SO(2). Therefore, we use a 4-dimensional latent space $Z = \mathbb{R}^2 \oplus \mathbb{R}^2$ and fix the Lie algebra basis to $L = L_1 \oplus L_2$, $L_1 = L_2 = [0, 1; -1, 0]$. Figure 8A shows the latent space learned by LaLiGAN. We observe that rotation in the first component shifts the rectangular bump, while rotation in the second component simultaneously shifts both bumps. This is also evident from the transformed and reconstructed samples in Figure 8E. This provides an example that our method can learn equivariant representations when we do not know the group transformation of each data point. We also include another experiment on SO(3) equivariant representation for a 3D object in Appendix A.4. 7 Conclusion We propose LaLiGAN, a novel generative modeling framework for discovering nonlinear symmetries. LaLiGAN decomposes the group action as a linear representation on a latent space and a pair of nonlinear mappings between the latent space and the observation space. By jointly optimizing the group representation and the nonlinear mappings, it discovers both the symmetry group and its nonlinear group action on the data. We also show that it can be applied to downstream tasks such as equation discovery, leading to equations with simpler forms and better long-term prediction accuracy. In the future, we plan to study how the knowledge of latent space symmetry can be better incorporated into equation discovery. 
For example, symmetry can act as a constraint to compress the search space for equations and accelerate the search. We also plan to investigate the connection between symmetry and other physical properties such as conservation laws. Given the prevalence of symmetries in the natural world, our long-term goal is to develop a general framework for automatically discovering symmetries and other types of governing laws from data and accelerate scientific discovery process. REFERENCES Erik J Bekkers. B-spline cnns on lie groups. *arXiv preprint arXiv:1909.12057*, 2019. Gregory Benton, Marc Finzi, Pavel Izmailov, and Andrew G Wilson. Learning invariances in neural networks from training data. *Advances in neural information processing systems*, 33:17605–17616, 2020. Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *arXiv preprint arXiv:2104.13478*, 2021. Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. *Proceedings of the national academy of sciences*, 113(15):3932–3937, 2016. Hugo Caselles-Dupré, Michael Garcia Ortiz, and David Filliat. Symmetry-based disentangled representation learning requires interaction with environments. *Advances in Neural Information Processing Systems*, 32, 2019. Kathleen Champion, Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Data-driven discovery of coordinates and governing equations. *Proceedings of the National Academy of Sciences*, 116(45):22445–22451, 2019. Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Edgar Dobriban, and Kostas Daniilidis. Learning augmentation distributions using transformed risk minimization. *arXiv preprint arXiv:2111.08190*, 2021. Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. *Advances in neural information processing systems*, 31, 2018. Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. In *International conference on Machine learning*, pp. 1321–1330. PMLR, 2019a. Taco S Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical cnns. *arXiv preprint arXiv:1801.10130*, 2018. Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. *Advances in neural information processing systems*, 32, 2019b. Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljačić. Equivariant contrastive learning. *arXiv preprint arXiv:2111.00899*, 2021. Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, and Rose Yu. Automatic symmetry discovery with lie algebra convolutional network. *Advances in Neural Information Processing Systems*, 34:2503–2515, 2021. Krish Desai, Benjamin Nachman, and Jesse Thaler. Symmetry discovery with deep learning. *Physical Review D*, 105(9):096031, 2022. Luca Falorsi, Pim De Haan, Tim R Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, and Taco S Cohen. Explorations in homeomorphic variational auto-encoding. *arXiv preprint arXiv:1807.04689*, 2018. Luca Falorsi, Pim de Haan, Tim R Davidson, and Patrick Forré. Reparameterizing distributions on lie groups. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3244–3253. PMLR, 2019. Urban Fasel, J Nathan Kutz, Bingni W Brunton, and Steven L Brunton. 
Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control. *Proceedings of the Royal Society A*, 478(2260):20210904, 2022.
WYEEWScbaM
What do the authors mean by "synthesizing an ordered tensor…"? Is that the input points? If so, please clarify and use it consistently, as mentioned in the beginning of Section 2. What is the notion of having a parallel method? Isn't it straightforward?
COMMUNICATION-EFFICIENT FEDERATED LEARNING VIA GRADIENT DISTILLATION Anonymous authors Paper under double-blind review ABSTRACT Federated learning revolutionizes collaborative model training across decentralized edge devices, ensuring privacy by avoiding direct data sharing. However, the frequent exchange of model updates introduces a significant communication overhead. The conventional FL process involves transmitting the differences in parameters between old and new models, resulting in redundant gradient communications due to the intricate interplay between model parameters and network architecture. Even minor adjustments to parameters necessitate the retransmission of entire models. In this paper, we introduce a groundbreaking concept known as gradient distillation, which decouples model parameters from network architecture, enabling the transmission of only essential information needed for synchronization. By leveraging gradient distillation, we approximate gradient disparities into a synthetic tensor sequence, allowing the recipient to reconstruct the sender’s intended model update. This approach eliminates the need to transmit the entire set of raw parameter differences, offering a highly promising solution for achieving greater communication efficiency while without significant accuracy degradation. Experimental results demonstrate that our approach achieves an unprecedented level of gradient compression performance, surpassing widely recognized baselines by an impressive margin of orders of magnitude. 1 INTRODUCTION Federated learning (FL) (McMahan et al., 2017; Shokri & Shmatikov, 2015) is a promising paradigm in machine learning that addresses the challenge of training models on decentralized data sources. Traditional machine learning approaches rely on centralized servers to collect and process all the data used for training. However, in many real-world scenarios, this centralized approach is impractical due to privacy concerns (Ching et al., 2018), regulatory constraints (GDPR, 2016), or the sheer volume of data generated at the edge (Zhou et al., 2019). FL emerged as a solution to this problem by allowing model training to occur directly on edge devices where the data are generated, without the need to transmit sensitive information to a central server. This approach has gained significant traction in recent years, particularly with the proliferation of mobile devices (Wang et al., 2020; Chen et al., 2023), IoT devices (Imteaj et al., 2021; Zhang et al., 2022), and edge computing (Wang et al., 2019; Nguyen et al., 2021). The decentralized nature of FL makes it well-suited for scenarios where data privacy and regulatory compliance are critical, such as in healthcare applications (Xu et al., 2021; Courtiol et al., 2019), financial transactions (Long et al., 2021; Kaplan, 1989), and other contexts where sensitive information is involved (Niu & Deng, 2022; Li et al., 2021). The inherent design of FL ensures that raw data remains securely stored on edge devices, rendering it inaccessible to the central server. This property is fundamental to the privacy-preserving aspect of FL. Instead of raw data, model updates are transmitted from participating devices to a central server, where they are aggregated to refine the global model (McMahan et al., 2017). Nevertheless, contemporary neural network models are characterized by a staggering number of parameters, often ranging in the millions or even billions. 
This is particularly exemplified by the immensely popular large language models (Brown et al., 2020), which can possess hundreds of billions of parameters. In many real-world scenarios, especially those involving decentralized data sources like mobile devices, transmitting such vast amounts of model parameters to a central server may be impractical or infeasible due to limitations in network bandwidth, high latency, or intermittent connectivity. The substantial communication cost acts as a barrier, impeding FL from effectively scaling the training process to accommodate a larger number of participants. Considering the communication-efficiency issue in FL is imperative for ensuring the practicality, scalability, and energy efficiency of the approach, especially in real-world applications where decentralized data sources are prevalent. Communication-efficient federated learning has gained significant attention in recent years (Aji & Heafield, 2017; Reiszadeh et al., 2020; Hönig et al., 2022; Liu et al., 2023). Existing approaches can be broadly categorized into several main strategies, each with its own set of limitations. The first strategy involves allowing devices to perform multiple local updates before transmitting their model updates to the central server, thereby reducing the frequency of communication rounds (McMahan et al., 2017; Haddadpour et al., 2019). While this approach reduces the overall transmission data amount, it may result in slower convergence and potential overfitting if not carefully managed. The second strategy focuses on compressing gradient information (Liu et al., 2023; Reiszadeh et al., 2020). This includes methods such as quantization (Hönig et al., 2022), which reduces the precision of gradient values to decrease the transmitted information volume, and sparsification (Aji & Heafield, 2017; Dai et al., 2022), which involves sending only a subset of gradients by retaining the most significant ones and setting others to zero. However, aggressive quantization and sparsification can lead to information loss and potentially hinder model accuracy. Additionally, pruning (Zhu et al., 2022; Wang et al., 2022) is employed to remove less influential weights or neurons from the model, effectively reducing the parameter count and, subsequently, the size of gradients. However, pruning methods require careful hyperparameter tuning and may result in model degradation if not applied judiciously. Another strategy focuses on uploading logits (Sattler et al., 2020) or dataset representations (Xiong et al., 2023). Instead of transmitting raw gradients, devices calculate and send logits (pre-softmax outputs) for their data samples to the central server. Alternatively, devices can convey aggregated representations of their datasets, such as centroids or other statistical summaries (Liu et al., 2022), which serve as a compact proxy for the raw gradients. These strategies offer diverse avenues to tackle communication overhead in FL, each with distinct trade-offs in terms of communication reduction, computational complexity, and potential impact on model performance. In this paper, we introduce a groundbreaking concept called gradient distillation, which exhibits unprecedented performance in FL communication compression, surpassing the renowned baseline FedAvg by a remarkable margin of $1904\times$ on benchmarking medical dataset PathMNIST. 
The conventional FL technique involves transmitting parameter differences between old and new models, resulting in redundant gradient communication due to the intricate relationship between model parameters and network architecture. Even minor parameter adjustments necessitate the retransmission of entire models. To tackle this issue, we propose decoupling model parameters from network architecture, enabling transmission of only essential information for synchronization. By employing gradient distillation, we approximate gradient disparities into a synthetic tensor sequence, allowing the recipient to reconstruct the sender’s intended model update. This parameter-structure decoupling leads to a significant reduction in gradient communication, as it avoids the need to transmit the entire set of raw parameter differences. Our method offers a highly promising solution for achieving more efficient FL. It greatly enhances communication efficiency, a critical concern in FL systems, ultimately improving the scalability and applicability of FL in real-world applications, particularly for scenarios with limited bandwidth or intermittent connectivity. The main contributions of this paper are summarized as follows: • We introduce Gradient Distillation, a novel approach of distilling the structural essence of gradients rather than directly compressing them, allowing for the transmission of only the indispensable information for model updates. • Experimental results demonstrate that gradient distillation reduces communication by orders of magnitude compared to baselines without significant accuracy degradation, enabling highly communication-efficient federated learning. • Our method provides a new perspective on overcoming communication bottlenecks in federated learning, facilitating the application of federated learning at scale on massively distributed devices with limited bandwidth. 2 METHODOLOGY In this paper, we consider federated learning across $N$ edge devices with heterogeneous bandwidth resources. For each device $i$, it possesses a private local dataset $\mathcal{D}_i = \{(x^i_j, y^i_j)\}_{j=1}^{m_i}$ drawn from a unique distribution $\mathcal{P}_i$ over $\mathcal{X} \times \mathcal{Y}$. Federated learning pursues collaboratively training a global model without directly accessing private data. To achieve this, edge devices periodically transmit their local model updates to a central server and receive the broadcasted global model. The ultimate objective is to obtain a global model that minimizes the risk across all private datasets, i.e., $$\arg \min_w L(w) \triangleq \frac{1}{N} \sum_{i=1}^{N} L_i(w),$$ where $w$ is the parameter of the model, $L_i(w) = \frac{1}{m} \sum_{j=1}^{m_i} \ell(w; (x_j, y_j))$ denotes the empirical risk with respect to device $i$, and $\ell$ represents the loss function. Our goal is to reduce the communication overhead incurred during this distributed training process. ### 2.1 Gradient Distillation **Motivation.** In each round of FL, the server broadcasts the aggregated global model from the preceding round to the client, while each client performs local training to generate an updated local model. Traditional methods involve transmitting parameter differences between the old and new models. However, this approach maintains a constant data transfer volume irrespective of changes in the numerical value of the parameter difference, as long as the network architecture remains fixed during the FL process. This introduces redundancy in gradient transmission. 
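To make this point concrete, a back-of-envelope sketch: for a fixed architecture, the payload of transmitting raw parameter differences is determined entirely by the parameter count and numeric precision, no matter how small the update actually is. The model size and precision below are illustrative assumptions, not values from this paper.

```python
# Back-of-envelope estimate of the per-round payload when transmitting raw parameter differences.
num_params = 11_000_000          # e.g., a ResNet-scale model (illustrative assumption)
bytes_per_value = 4              # float32
payload_per_round = num_params * bytes_per_value
print(f"{payload_per_round / 1e6:.1f} MB of parameter differences per round, "
      f"regardless of how small the update is")   # ~44 MB every round under these assumptions
```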
We claim that this issue arises due to the interdependence between model parameters and network architecture. Since network parameters and structure are intertwined, even slight parameter adjustments necessitate the retransmission of entire models. To counteract this, we propose decoupling model parameters from network architecture to transmit only the essential information required for synchronization. Specifically, we approximate the gradient disparities between models into a synthetic tensor sequence, employing a distillation-inspired concept. The recipient (i.e., the server side) can then reconstruct the intended model update of the sender (i.e., the client side) by taking a single descent step on these ordered tensors in conjunction with the previous model state. Through this parameter-structure decoupling, our approach transmits only the indispensable information for model updates, bypassing the need for transmitting the whole amount of raw parameter differences. This results in a substantial reduction in gradient communication. We refer to this approach as **gradient distillation**, which will be introduced in detail in the following. **Gradient Distillation.** For a model with parameters $w$, we define its difference between two periods $t_1$ and $t_2$ as $\Delta \omega(t_1, t_2) = \omega(t_1) - \omega(t_2)$. By approximating $\Delta \omega(t_1, t_2)$, we can update the model parameters at timestamp $t_2$ even when we only have access to the stale model $\omega(t_1)$. To achieve this, we synthesize an ordered tensor sequence $\zeta = \{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m}$, which is tailored to approximate the parameter difference using one-step gradient descent on $\omega(t_1)$. Our objective is to discover the shortest projected path between $\omega(t_1)$ and $\omega(t_2)$. To synthesize this ordered sequence, we minimize the error between the parameter difference and the sequence gradient on $\omega(t_1)$: $$\{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m} = \arg \min_{\{(x_j, y_j)\}_{j=1}^{m}} \left\| \text{optim} \left( \sum_{i=1}^{m} \frac{\partial \ell(\omega(x_j, y_j))}{\partial \omega} \right|_{\omega=\omega(t_1)} \right) - \Delta \omega(t_1, t_2) \right\|_2^2,$$ where $m$ represents the solved length of the optimal sample sequence, and $\text{optim}(g)$ denotes some operators performed on the gradient $g$ involved in the optimizer such as SGD or Adam. Solving this optimization problem directly poses a significant challenge. Therefore, we adopt a greedy approach to find an approximate solution, illustrated in Fig. 1 (c). The objective is to identify a minimal sequence of synthetic tensors, each associated with labels, that can effectively replicate the true gradient when employed to train the original model. This effectiveness is assessed by evaluating the disparity between the reproduced and actual gradients. Through a step-by-step, iterative process, we incrementally construct the optimal sample sequence. This greedy strategy offers a practical means of working towards the broader objective of obtaining a concise statistical representation of localized model updates. ### 2.2 Gradient Distillation based Federated Learning We now outline the workflow of the proposed Gradient Distillation based Federated Learning (FedGD) framework, encompassing the following key states: initialization, broadcasting, local training, uploading and global aggregation. 
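Every stage below relies on the gradient distillation step of Eq. (2) and the matching one-step reconstruction. Before walking through the stages, here is a hedged PyTorch sketch of both. It assumes plain SGD, so that $\text{optim}(g) = \gamma \cdot g$, a soft-label cross-entropy task loss (PyTorch ≥ 1.10), and for brevity it optimizes a fixed-length synthetic sequence jointly rather than growing it greedily; the function names `distill_gradient` and `reconstruct` are ours, not from a released implementation.

```python
# Illustrative sketch of gradient distillation (Eq. 2) and the one-step
# reconstruction performed by the recipient. Not the paper's released code.
import torch
import torch.nn as nn

def flat_grad(model, loss, create_graph=False):
    # Gradient of `loss` w.r.t. all parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, list(model.parameters()),
                                create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def distill_gradient(model, delta_w, lr, seq_len=8, in_dim=32, n_classes=10,
                     n_steps=300, synth_lr=0.1):
    # Synthesize (x_hat, y_hat) so that one SGD step on them reproduces delta_w.
    loss_fn = nn.CrossEntropyLoss()
    x_hat = torch.randn(seq_len, in_dim, requires_grad=True)
    y_hat = torch.randn(seq_len, n_classes, requires_grad=True)  # soft labels
    opt = torch.optim.Adam([x_hat, y_hat], lr=synth_lr)
    for _ in range(n_steps):
        opt.zero_grad()
        g = flat_grad(model, loss_fn(model(x_hat), y_hat.softmax(-1)),
                      create_graph=True)
        err = torch.sum((lr * g - delta_w) ** 2)  # || optim(g) - delta_w ||^2
        err.backward()
        opt.step()
    return x_hat.detach(), y_hat.detach()

def reconstruct(model, x_hat, y_hat, lr):
    # Recover the intended model with a single descent step on the synthetic tensors.
    loss = nn.CrossEntropyLoss()(model(x_hat), y_hat.softmax(-1))
    g = flat_grad(model, loss)
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p -= lr * g[offset:offset + n].view_as(p)
            offset += n
    return model
```

In this sketch, only `x_hat` and `y_hat` (a few short tensors) are transmitted in place of the full parameter difference `delta_w`, which is where the communication reduction comes from.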
– **Initialization:** The first $M$ rounds serve as the initialization stage, for which the training process adheres to the standard federated learning approach. The central server initiates the process by broadcasting an initial global model $\omega_g^{(0)}$ to all participating devices. Each device then conducts local training on this model, utilizing its individual private data. Upon completion of the local training, the devices upload their respective updated local models. Subsequently, the central server aggregates these models to generate a new global model. After $M$ rounds, the central server possesses the global model with parameter $\omega_g^{(M)}$. For the subsequent training rounds, we introduce the proposed FedGD strategy to reduce the communication burden between the central server and the edge clients, for both the global model broadcasting (downlink communication) and local updates uploading (uplink communication). The workflow is offered in Fig. 2 (b). Specifically, for the $t$-th round, our framework works as follows: – **Broadcasting:** In this stage, the central server performs gradient distillation between the new global model $\omega_g^{(t)}$ and the broadcasted model $\omega_g^{(t-1)}$ in the last round. The objective is to obtain a compressed datastream $\zeta_g^{(t)}$ for downlink communication, which is a tensor of length $m_g^{(t)}$. To achieve this, the server solves the following optimization problem: $$\zeta_g^{(t)} = \arg\min_{\{(x_j, y_j)\}_{j=1}^{m_g^{(t)}}} \left\| \text{optim} \left( \sum_{j=1}^{m_g^{(t)}} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t-1)}} \right) - \Delta \omega_g(t-1, t) \right\|_2^2,$$ where $m_g^{(t)}$ represents the length of tensor $\zeta_g^{(t)}$, and $\Delta \omega_g(t-1, t)$ denotes the difference between $\omega_g^{(t)}$ and $\omega_g^{(t-1)}$. Instead of broadcasting the parameter difference, the server broadcasts $\zeta_g^{(t)} = \{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m_g^{(t)}}$ to all participating devices, which significantly reduces the downlink burden. – **Local Training:** Upon receiving the broadcasted tensor $\zeta_g^{(t)}$, each edge device recovers the intended global model $\omega_g^{(t)}$ using the global model $\omega_g^{(t-1)}$ from the last round. This recovery process is achieved through one-step gradient descent on $\zeta_g^{(t)}$: $$\omega_g^{(t)} = \omega_g^{(t-1)} - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta_g^{(t)}} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t-1)}} \right).$$ Furthermore, for each edge device $i$, started from the model $\omega_i^{(t)}$, it performs local training on its private dataset for $K$ iterations to update its parameters as $\omega_i^{(t,k)}$: $$\omega_i^{(t,k)} \leftarrow \omega_i^{(t,k-1)} - \gamma \sum_{(x,y) \in B_i} \frac{\partial \ell(\omega; (x, y))}{\partial \omega} \bigg|_{\omega=\omega_i^{(t,k-1)}} \quad \text{for } k \in [K],$$ where $\omega_i^{(t,0)} = \omega_i^{(t)}$, $\omega_i^{(t,K)} = \omega_i^{(t,K)}$, and $B_i$ is the $i$-th random batch drawn from $D_i$. 
– **Uploading:** After local training, each device $i$ executes gradient distillation between the global model $\omega_g^{(t)}$ and the updated local model $\omega_i^{(t)}$: $$\zeta_i^{(t)} = \arg\min_{\{(x_j, y_j)\}_{j=1}^{m_i^{(t)}}} \left\| \text{optim} \left( \sum_{j=1}^{m_i^{(t)}} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega_i^{(t)}} \right) - (\omega_g^{(t)} - \omega_i^{(t)}) \right\|_2^2,$$ where $m_i^{(t)}$ denotes the length of tensor $\zeta_i^{(t)}$. The value of $m_i^{(t)}$ is related to the difference between $\omega_i^{(t)}$ and $\omega_g^{(t)}$, which varies in each round. Each device $i$ uploads the synthetic samples $\zeta_i^{(t)}$ to the server, which significantly reduces the uplink communication burden. – **Global Aggregation:** Upon receiving the uploaded tensor $\zeta_i^{(t)}$, the central server performs backpropagation on the model $\omega_g^{(t)}$ with data $\zeta_i^{(t)}$ to recover the intended updated local model $\omega_i^{(t)}$ for each device $i$. This is achieved by addressing the following optimization problem: $$\omega_i^{(t)} = \omega_g^{(t)} - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta_i^{(t)}} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t)}} \right).$$ Algorithm 1: Gradient Distillation based Communication-efficient Federated Learning Input: $N$ edge devices with private datasets $\{D_i\}_{i=1}^N$, communication round number $T$, learning rate $\eta$, initial rounds $M$, local update number $K$, batchsize $B$. Output: FL-trained global model $\omega_g^T$. Server Executes: Initialization: Following the standard FedAvg workflow. After $M$ rounds, the central server has $\omega_g^{(M)}$ with $\omega_g^{(M-1)}$, and the device has $\omega_g^{(M-1)}$ for each communication round $t = M, \ldots, T$ do Obtain $\zeta_g^{(t)}$ by performing gradient distillation between $\omega_g^{(t-1)}$ and $\omega_g^{(t)}$ with Eq. (3) for each device $i = 1, 2, \ldots, N$ in parallel do Broadcasting $\zeta_g^{(t)}$ to device $i$ $\zeta_i^{(t)} \leftarrow$ Device Executes $(i, \zeta_g^{(t)})$ Reconstruct the local model $\omega_i^{(t)}$ by one-step gradient descent on $\zeta_i^{(t)}$ with Eq. (7) end $\omega_g^{(t+1)} \leftarrow \frac{1}{N} \sum_{i=1}^N \omega_i^{(t)}$ end Device Executes $(i, \zeta_g^{(t)})$: Reconstruct the global model $\omega_g^{(t)}$ by one-step gradient descent on $\zeta_g^{(t)}$ with Eq. (4) Update the local model as $\omega_i^{(t)}$ by local training on $D_i$ with Eq. (5) Obtain $\zeta_i^{(t)}$ by performing gradient distillation between $\omega_g^{(t)}$ and $\omega_i^{(t)}$ with Eq. (6) Return $\zeta_i^{(t)}$ end The server then aggregates the reconstructed local models $\omega_i^{(t)}$ from all selected devices to obtain an updated global model $\omega_g^{(t+1)}$: $$\omega_g^{(t+1)} = \frac{1}{N} \sum_{i=1}^N \omega_i^{(t)}. \quad (8)$$ In essence, the compact tensor sequence $\zeta_i^{(t)}$ allows the server to efficiently recreate device $i$’s full model update through one forward pass, instead of directly transmitting the high-dimensional model weights $\omega_i^{(t)}$, while preserving accurate information to synchronize the global and local models. Remark. As described above, our approach involves performing gradient distillation at both the central server and edge devices, which introduces slightly higher computational demands for these components. 
It is important to highlight that, in comparison to conventional FedAvg, our method does not incur longer overall training time. On the contrary, it actually leads to much shorter training times per round. This improvement is attributed to the fact that gradient distillation substantially reduces the amount of data that needs to be transmitted, consequently reducing the time required for transmission. This has been empirically verified in experiment section, as shown in Table 3. This efficiency gain is a significant advantage of our approach, as it allows for quicker model updates. It is worth noting that communication resources are typically much more constrained than computation resources. Therefore, the approach of reducing the communication burden by slightly increasing the computation burden is highly practical for many real-world applications. 2.3 Parallel Version To further mitigate potential efficiency losses due to the extra computational steps, inspired by parallel federated learning framework (Zhang et al., 2023), we concurrently conduct server-side model aggregation while allowing for device-level local training. By overlapping aggregation stage with local update stage, our solution maintains substantial communication savings without compromising computational throughput, as indicated in Fig. 2 (c). Specifically, for the $t$-th round, the parallel version of our framework (Parallel FedGD) works as follows: – Client Side. Table 1: Test accuracy (%) and data transmission size reduction ratios of FedGD and baseline methods on CIFAR-10 using different networks. | Model | Method | Avg. data transmission Volume (MB) | Speedup | Min. data transmission Volume (MB) | Speedup | Accuracy (%) | |-------------|-------------------------|------------------------------------|---------|-----------------------------------|---------|--------------| | MobileNet | FedAvg | 4.21 | 1× | 4.21 | 1× | 82.48 | | | Top-k (Aji & Heafield, 2017) | 2.11 | 2× | 2.11 | 2× | 78.85 | | | FedPAQ (Rezizadeh et al., 2020) | 1.05 | 4× | 1.05 | 4× | 79.77 | | | DAdaQ (Hönig et al., 2022) | 1.92 | 2.19× | 1.05 | 4× | 80.68 | | | AdaGQ (Liu et al., 2023) | 1.57 | 2.68× | 1.05 | 4× | 80.22 | | | FedGD | 0.0328 | 128× | 0.0061 | 690× | 82.19 | | | Parallel FedGD | 0.0472 | 89× | 0.0061 | 690× | 82.08 | | ShuffleNet | FedAvg | 5.42 | 1× | 5.42 | 1× | 83.15 | | | Top-k (Aji & Heafield, 2017) | 2.71 | 2× | 2.71 | 2× | 79.07 | | | FedPAQ (Rezizadeh et al., 2020) | 1.36 | 4× | 1.36 | 4× | 80.49 | | | DAdaQ (Hönig et al., 2022) | 2.64 | 2.05× | 1.36 | 4× | 81.87 | | | AdaGQ (Liu et al., 2023) | 1.98 | 2.74× | 1.36 | 4× | 81.62 | | | FedGD | 0.0421 | 129× | 0.0031 | 1765× | 82.98 | | | Parallel FedGD | 0.0571 | 95× | 0.0031 | 1765× | 82.74 | | ResNet-18 | FedAvg | 11.69 | 1× | 11.69 | 1× | 85.31 | | | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 80.14 | | | FedPAQ (Rezizadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 83.39 | | | DAdaQ (Hönig et al., 2022) | 4.97 | 2.35× | 2.92 | 4× | 82.47 | | | AdaGQ Liu et al. 
(2023) | 4.09 | 2.86× | 2.92 | 4× | 82.09 | | | FedGD | 0.0369 | 317× | 0.0092 | 1268× | 85.11 | | | Parallel FedGD | 0.0508 | 230× | 0.0154 | 759× | 85.03 | • **Local Training:** Each device $i$ receives the broadcasted gradient distillation tensor $\zeta^{(t-1)}_g$ from the server, and reconstructs the global model $\omega^{(t-1)}_g$ based on the previous model $\omega^{(t-2)}_g$: $$\omega^{(t-1)}_g = \omega^{(t-2)}_g - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta^{(t-1)}_g} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega^{(t-2)}_g} \right).$$ Device $i$ then performs local training to get the updated local model $\omega^{(t)}_i$ according to Eq. (5). • **Uploading:** After completing the local training, device $i$ performs gradient distillation to get $\zeta^{(t)}_i$ based on $\omega^{(t)}_i$ and $\omega^{(t-1)}_g$ according to Eq. (6). $\zeta^{(t)}_i$ is then uploaded to the central server. – **Server Side.** • **Broadcasting:** During local training and gradient distillation on edge device $i$ to obtain the local model $\omega^{(t)}_i$ and distilled tensor $\zeta^{(t)}_i$, the central server performs global aggregation in parallel to derive the updated global model $\omega^{(t)}_g$ and its distilled form $\zeta^{(t)}_g$. By leveraging parallel optimization, when the server receives all distilled local gradients $\zeta^{(t)}_i$, it will already have finalized the updated global gradient $\zeta^{(t)}_g$ through concurrent aggregation. The server can then immediately broadcast $\zeta^{(t)}_g$ to the edge devices to start the next round of federated optimization. • **Global Aggregation:** While the broadcast stage is performing, the central server reconstructing $\omega^{(t)}_i$ by: $$\omega^{(t)}_i = \omega^{(t-1)}_g - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta^{(t)}_g} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega^{(t-1)}_g} \right).$$ Based on $\omega^{(t)}_i$, the central server can get the new global model: $\omega^{(t+1)}_g = \frac{1}{N} \sum_{i=1}^{N} \omega^{(t)}_i$. It then performs gradient distillation between the two latest global models: $$\zeta^{(t+1)}_g = \arg \min_{\{(x_j, y_j)\}_{j=1}^{m^{t+1}_g}} \left\| \text{optim} \left( \sum_{j=1}^{m^{t+1}_g} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega^{(t)}_g} \right) - \Delta \omega_g(t, t+1) \right\|^2.$$ This parallel execution scheme, with the server and devices simultaneously performing their respective operations, significantly improves the computational efficiency of FedGD. 3 EXPERIMENTS 3.1 EXPERIMENTAL SETUP Baselines. We compare our proposed FedGD approach with five baseline methods: (1) FedAvg, (2) Top-k (Aji & Heafield, 2017), (3) FedPAQ (Reiszadeh et al., 2020), (4) DAdaQ (Hönig et al., 2022), and (5) AdaGQ (Liu et al., 2023). Datasets. Experiments are conducted on three benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100, and a medical image dataset PathMNIST (Yang et al., 2023). PathMNIST is a dataset for predicting survival from colorectal cancer histology slides. It contains 100,000 images in the training set and 7,180 images in the test set. The image size is $3 \times 28 \times 28$. For CIFAR-10, we evaluate the generalization of our approach to different network architectures by testing with MobileNet (Howard et al., 2017), ShuffleNet (Zhang et al., 2018), and ResNet-18 (He et al., 2016). 
To validate performance across datasets, we use MobileNet for experiments on CIFAR-100 and PathMNIST. All baselines use the same network architecture as FedGD for a fair comparison. Implementation Details. We implement FedGD as well as the baseline methods using the PyTorch framework. The SGD optimizer with a learning rate of 0.01 is used for all approaches. Unless otherwise stated, the batch size is 64 and the number of local epochs per round is set to 3. The training process spans 150 communication rounds, with gradient distillation initiated from the 31st round onwards. Following the commonly used simulation setting (Liu et al., 2023; Hönig et al., 2022), in our experiments, we simulate 50 virtual devices, and set the bandwidth to 50 Mbps. 3.2 PERFORMANCE EVALUATION Table 1 shows the test accuracy, average and minimum data transmission size reduction ratios of FedGD and the baseline methods in CIFAR-10 using three widely used network architectures. FedAvg serves as the comparison benchmark. Among the compared methods, Top-k and FedPAQ, which do not adopt adaptive schemes, have equal average and minimum compression ratios, and lower accuracy than FedAvg (by 3.63% and 2.71% for MobileNet). DAdaQ and AdaGQ apply different adaptive quantization schemes based on time and gradient norms. AdaGQ saves $2.68\times$ in communication compared to $2.19\times$ for DAdaQ, but with a 0.46% lower precision. Our method, FedGD achieves higher precision than all baselines at 82.08%, only 0.4% lower than the uncompressed FedAvg. Most notably, it provides average $128\times$ compression, reaching up to $690\times$ in later rounds as the differences in network parameters shrink. Across MobileNet, ShuffleNet, and ResNet-18, FedGD significantly outperforms baselines in balancing training accuracy and communication efficiency. To evaluate the generalizability of our approach across different datasets, we further validate it on CIFAR-100 and PathMNIST in addition to the previous experiments. As shown in Table 2: | Model | Method | Avg. data transmission Volume (MB) | Speedup | Min. data transmission Volume (MB) | Speedup | Accuracy (%) | |-------------|-------------------------|------------------------------------|---------|-----------------------------------|---------|--------------| | CIFAR-100 | FedAvg | 11.69 | 1× | 11.69 | 1× | 60.17 | | | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 56.26 | | | FedPAQ (Reiszadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 57.38 | | | DAdaQ (Hönig et al., 2022) | 6.22 | 1.88× | 2.92 | 4× | 58.51 | | | AdaGQ (Liu et al., 2023) | 5.65 | 2.07× | 2.92 | 4× | 58.37 | | | FedGD | 0.0089 | 118× | 0.0122 | 958× | 59.84 | | PathMNIST | FedAvg | 11.69 | 1× | 11.69 | 1× | 87.81 | | | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 83.20 | | | FedPAQ (Reiszadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 84.74 | | | DAdaQ (Hönig et al., 2022) | 4.97 | 2.36× | 2.92 | 4× | 86.17 | | | AdaGQ (Liu et al., 2023) | 4.09 | 2.87× | 2.92 | 4× | 85.92 | | | FedGD | 0.0345 | 339× | 0.0614 | 1904× | 87.54 | Table 3: Results with different batch size, local update number and Pre-distillation rounds. $T_g$ represent the calculated time for gradient distillation, $T_c$ represents the upload communication time of standard FL, $T_c^*$ represents the time of upload communication using gradient distillation FL. | Batch size $B$ | Avg. 
data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup | |----------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------| | 32 | $3.28 \times 10^{-2}$ | 85.34 | 8.97 | 0.05 | 9.02 | 18.78 | 2.08× | | 64 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× | | 128 | $4.47 \times 10^{-2}$ | 84.97 | 12.22 | 0.07 | 12.29 | 18.78 | 1.53× | | Local round number $K$ | Avg. data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup | |------------------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------| | 1 | $2.08 \times 10^{-2}$ | 85.79 | 5.69 | 0.03 | 5.72 | 18.78 | 3.28× | | 2 | $2.73 \times 10^{-2}$ | 85.42 | 7.46 | 0.04 | 7.50 | 18.78 | 2.5× | | 3 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× | | Pre-distillation rounds $M$ | Avg. data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup | |-----------------------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------| | 30 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× | | 50 | $2.95 \times 10^{-2}$ | 85.17 | 8.07 | 0.05 | 8.12 | 18.78 | 2.31× | | 80 | $2.12 \times 10^{-2}$ | 85.19 | 5.80 | 0.03 | 5.83 | 18.78 | 3.22× | In summary, FedGD achieves state-of-the-art compression ratios while maintaining high model performance, validating the effectiveness of our proposed gradient distillation approach. ### 3.3 Ablation Analysis on Hyperparameters Table 3 displays the compression ratios achieved by our method for varying batch sizes (32, 64, 128), round numbers of local updates (1, 2, 3), and pre-distillation rounds (30, 50, 80). It is evident that lower batch sizes and fewer local updates result in higher compression, as they entail smaller model parameter differences between rounds, containing less information for compression. Notably, compared to a batch size of 128, using a batch size of 32 further reduces communication by 1.36×. Moreover, increasing the number of pre-distillation rounds (i.e., applying compression in later training stages) leads to a reduction in average uploaded parameters, as differences diminish over time. When $M = 80$ rounds, only an average of $2.12 \times 10^{-2}$ MB parameters are uploaded. Additionally, we provide the computation time $T_g$ for gradient distillation, the communication time $T_c^*$ for uploading tensors generated by gradient distillation, and the communication time $T_c$ for uploading model differences. The combined time for $T_g$ and $T_c^*$ is significantly smaller than $T_c$, indicating that our method substantially shortens the training time per round. This improvement is attributed to the fact that gradient distillation significantly reduces the amount of data that needs to be transmitted, consequently reducing the time required for transmission. ### 4 Conclusion In this study, we introduced gradient distillation-based communication-efficient federated learning (FedGD), a novel approach where devices and the server synthesize tensor sequences to represent model updates, rather than transmitting raw model differences. Our key innovation lies in distilling the structural essence of gradients, as opposed to directly compressing them. 
This enables the transmission of only essential information for synchronization, bypassing the need to transmit the entire set of raw parameter differences. Experimental results demonstrate that FedGD substantially reduces communication overhead without sacrificing accuracy significantly. This highlights its potential to scale privacy-preserving distributed training across edge networks by leveraging model-specific representations. REFERENCES Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. *arXiv preprint arXiv:1704.05021*, 2017. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6fcb4967418bfb8ac142f64a-Paper.pdf. Rui Chen, Dian Shi, Xiaoqi Qin, Dongjie Liu, Miao Pan, and Shuguang Cui. Service delay minimization for federated learning over mobile devices. *IEEE Journal on Selected Areas in Communications*, 41(4):990–1006, 2023. Travers Ching, Daniel S Himmelstein, Brett K Beaulieu-Jones, Alexandr A Kalinin, Brian T Do, Gregory P Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M Hoffman, et al. Opportunities and obstacles for deep learning in biology and medicine. *Journal of The Royal Society Interface*, 15(141):20170387, 2018. Pierre Courtiol, Charles Maussion, Matahi Moairi, Elodie Pronier, Samuel Pilcer, Meriem Sefta, Pierre Manceron, Sylvain Toldo, Mikhail Zaslavskiy, Nolwenn Le Stang, et al. Deep learning-based classification of mesothelioma improves prediction of patient outcome. *Nature medicine*, 25(10):1519–1525, 2019. Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, and Dacheng Tao. Dispfl: Towards communication-efficient personalized federated learning via decentralized sparse training. *arXiv preprint arXiv:2206.00187*, 2022. GDPR. General data protection regulation, 2016. URL https://gdprinfo.eu/. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local sgd with periodic averaging: Tighter analysis and adaptive synchronization. *Advances in Neural Information Processing Systems*, 32, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Robert Hönig, Yiren Zhao, and Robert Mullins. Dadaquant: Doubly-adaptive quantization for communication-efficient federated learning. In *International Conference on Machine Learning*, pp. 8852–8866. PMLR, 2022. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, and M Hadi Amini. A survey on federated learning for resource-constrained iot devices. 
*IEEE Internet of Things Journal*, 9(1):1–24, 2021. Steven N Kaplan. Campeau’s acquisition of federated: Value destroyed or value added. *Journal of Financial Economics*, 25(2):191–212, 1989. Alex Krizhevsky, Geoffrey Hinton, et al. *Learning multiple layers of features from tiny images*. Toronto, ON, Canada, 2009. Yijing Li, Xiaofeng Tao, Xuefei Zhang, Junjie Liu, and Jin Xu. Privacy-preserved federated learning for autonomous driving. *IEEE Transactions on Intelligent Transportation Systems*, 23:8423–8434, 2021.
ulMXGO1fdH
- [Introduction, 1st paragraph] The authors say that “However, BPTT is unsuitable for online learning (Kaiser et al., 2020; Bohnstingl et al., 2022)”. Throughout the paper, I see them using gradients from BPTT as the ground truth for their approach. Can you explain this discrepancy?
ESTIMATING POST-SYNAPTIC EFFECTS FOR ONLINE TRAINING OF FEED-FORWARD SNNs

Anonymous authors
Paper under double-blind review

ABSTRACT

Facilitating online learning in spiking neural networks (SNNs) is a key step in developing event-based models that can adapt to changing environments and learn from continuous data streams in real-time. Although forward-mode differentiation enables online learning, its computational requirements restrict scalability. This is typically addressed through approximations that limit learning in deep models. In this study, we propose Online Training with Postsynaptic Estimates (OTPE) for training feed-forward SNNs, which approximates Real-Time Recurrent Learning (RTRL) by incorporating temporal dynamics not captured by current approximations, such as Online Training Through Time (OTTT) and Online Spatio-Temporal Learning (OSTL). We show improved scaling for multi-layer networks using a novel approximation of temporal effects on the subsequent layer's activity. This approximation incurs minimal overhead in time and space complexity compared to similar algorithms, and the calculation of temporal effects remains local to each layer. We characterize the learning performance of our proposed algorithms on multiple SNN model configurations for rate-based and time-based encoding. OTPE exhibits the highest directional alignment to exact gradients, calculated with backpropagation through time (BPTT), in deep networks and, on time-based encoding, outperforms other approximate methods. We also observe sizeable gains in average performance over similar algorithms in offline training of Spiking Heidelberg Digits with equivalent hyper-parameters (OTTT/OSTL – 70.5%; OTPE – 75.2%; BPTT – 78.1%).

1 INTRODUCTION

Spiking neural networks (SNNs) promise a path toward energy-efficient machine intelligence for streaming data (Yik et al., 2023). Despite this potential, efficiently training them on temporal sequences remains a challenge. Backpropagation through time (BPTT) applied with surrogate gradient estimates of SNN neurons (Neftci et al., 2019; Zenke and Ganguli, 2018; Shrestha and Orchard, 2018) has become the dominant method to train SNNs. However, BPTT is unsuitable for online learning (Kaiser et al., 2020; Bohnstingl et al., 2022; Rostami et al., 2022). Real-time recurrent learning (RTRL) (Williams and Zipser, 1989) computes exact gradients for stateful models without temporal unrolling. This enables online learning at the cost of $O(n^3)$ storage and $O(n^4)$ compute, which is impractical for all but the smallest networks. To address these limitations, practical implementations approximate RTRL by adopting low-rank matrix approximations (Mujika et al., 2018; Benzing et al., 2019), leveraging stochastic gradient estimates (Tallec and Ollivier, 2018), incorporating model-specific assumptions (Bohnstingl et al., 2022), or selectively omitting certain influence pathways (Menick et al., 2020). Of these, a promising approach for SNN training stores only the diagonal of the influence Jacobian used in the gradient computation. This approximation restricts the temporal influence in the gradient calculation to the current output of a single layer, which neglects how a neuron's previous outputs affect the membrane potentials of downstream neurons. It therefore reduces learning performance relative to the exact gradient calculations of RTRL or BPTT (Bohnstingl et al., 2022), due to untracked causal effects over time.
To address this, we develop a novel approximation of this temporal effect through our algorithm, Online Training with Postsynaptic Estimates (OTPE). OTPE maintains a trace of parameter influence over multiple time-steps, implementing a comprehensive approximation of the entire temporal effect in a one-hidden-layer model.

Figure 1: Depiction of RTRL and OTPE. The grey dotted lines indicate the forward-pass effects through a feed-forward spiking neural network. (a) A graphical depiction of the computations incurred in RTRL. Here, the Jacobian ($J$) matrix is calculated for all the upstream parameters in the model. Thus, we represent this by gradients in cubes, reflecting the $n^3$ size of the Jacobian. (b) Depiction of OTPE. Unlike RTRL, all temporal gradient calculations are layer-local, and loss is backpropagated.

In deep networks, spikes generated in earlier layers will have a delayed influence on the spiking activity of neurons in deeper layers. Thus, the gradient approximation worsens as the error is back-propagated if only the current time-step is considered. Intuitively, the performance difference between Online Spatio-Temporal Learning (OSTL) (Bohnstingl et al., 2022) and BPTT reflects the impact of these residual temporal effects, since the explicit difference between OSTL and exact gradient calculation is the exclusion of these residual effects. We demonstrate that OTPE's approximation significantly increases alignment with exact gradients. We observe a $\sim 70\%$ increase in gradient cosine similarity to BPTT in the first hidden layer and $\sim 50\%$ in the second hidden layer of a 2-hidden-layer network trained on the Spiking Heidelberg Digits (SHD) dataset (Cramer et al., 2020). We consistently observe similar improvements for both online and offline learning, and across the other evaluated datasets and model configurations. Our primary contributions include:

- a novel approximation of RTRL, OTPE, that captures the effects of multi-step temporal sequences through a spiking neuron, which are excluded from previous algorithms;
- a further relaxation of our algorithm that achieves similar scalability to the state of the art while delivering superior learning performance on multiple tasks;
- in-depth evaluations of OTPE against existing SNN training algorithms, including gradient approximation quality and learning performance on temporal and rate-encoded inputs.

2 BACKGROUND

Training stateful networks such as SNNs requires temporal credit assignment (Maeda and Wakamura, 2005; Lillicrap et al., 2016; Bohnstingl et al., 2022; Xiao et al., 2022; Kaiser et al., 2020). Of these different methods, state-of-the-art results in SNN training typically employ BPTT-based credit assignment (Eshraghian et al., 2023) with surrogate derivatives to calculate gradients through the SNN's discontinuity (Zenke and Vogels, 2021; Zenke and Ganguli, 2018). Consequently, we compare the gradient approximation quality of OTPE, Online Training Through Time (OTTT) (Xiao et al., 2022), and OSTL against the exact gradients computed by BPTT. All methods are applied to deep feed-forward SNNs composed of leaky integrate-and-fire (LIF) neurons in fully connected layers.

### 2.1 LIF neuron

SNNs promise low computational requirements, arising from activation sparsity and unary outputs. LIF neurons are the most commonly used for balancing performance and complexity, especially in deep models (Zenke and Ganguli, 2018).
Similar to a plethora of other work (Fang et al., 2021), we use a subtraction-based reset formulation of the LIF, with the neuron reset behavior written as \[ \text{Reset} : \quad U^l_t = \lambda U^l_{t-1} - V_{th} \cdot s^l_t, \quad s^l_t = H(\lambda U^l_{t-1} + s^{l-1}_t \cdot \theta - V_{th}). \] Here, \(U^l_t\) is the neuron’s membrane potential in layer \(l\) at time-step \(t\), which decays by the leak \(\lambda\) while accumulating spiking inputs \(s^{l-1}_t\) from the previous layer, weighted by \(\theta\). The neuron emits a spike \(s^l_t\) whenever its membrane potential exceeds the threshold \(V_{th}\). The derivative of the Heaviside step function (\(H\)) is the Dirac delta function which is zero almost everywhere, effectively setting all gradients to zero. To generate non-zero gradients, we employ surrogate gradients, which replace the dirac delta function with the derivative of the fast sigmoid function (Neftci et al., 2019). ### 2.2 RTRL Real-Time Recurrent Learning (RTRL) (Fig. 1) calculates gradients through time using forward-mode differentiation, calculating and storing Jacobian-vector products. While BPTT must store outputs at each layer and unroll the network to perform reverse-mode differentiation through time, RTRL stores and updates each parameter’s effects on the network’s state. Because the stored Jacobian tracks every parameter’s influence on each state variable (\(O(n^3)\) in the number of parameters) the network avoids unrolling to calculate gradients. Due to this, RTRL can calculate exact gradients for online learning. RTRL gradient calculation for the output layer of an SNN can be written as \[ \frac{\partial L}{\partial \theta^l} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s^l_t} \frac{\partial s^l_t}{\partial U^l_t} \frac{\partial U^l_t}{\partial \theta^l} \] \[ = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s^l_t} \frac{\partial s^l_t}{\partial U^l_t} \left( \frac{\partial U^l_t}{\partial \theta^l} + \frac{\partial U^l_t}{\partial U^l_{t-1}} \frac{\partial U^l_{t-1}}{\partial \theta^l} \right). \] We denote the loss with \(L\), the spike output with \(s\), the membrane potential with \(U\), the parameters with \(\theta\), the total number of time-steps with \(T\), and the current time-step with \(t\). We can recursively calculate and store the temporal gradients, \(\frac{\partial U^l_t}{\partial \theta^l}\), in eqn (1) through \(\frac{\partial U^l_t}{\partial \theta^l} = \left( \frac{\partial U^l_t}{\partial \theta^l} + \frac{\partial U^l_t}{\partial U^l_{t-1}} \frac{\partial U^l_{t-1}}{\partial \theta^l} \right)\). For the hidden layer, we can expand eqn (1), substituting \(\frac{\partial U^l_t}{\partial \theta^l}\) with \(\frac{\partial U^l_t}{\partial \theta^l_{t-1}}\), resulting in \[ \frac{\partial U^l_t}{\partial \theta^l_{t-1}} = \frac{\partial U^l_t}{\partial s^l_{t-1}} \frac{\partial s^l_{t-1}}{\partial \theta^l_{t-1}} \] \[ = \frac{\partial U^l_t}{\partial s^l_{t-1}} \left( \frac{\partial s^l_{t-1}}{\partial U^l_{t-1}} \left( \frac{\partial U^l_{t-1}}{\partial \theta^l_{t-1}} + \frac{\partial U^l_{t-1}}{\partial U^l_{t-2}} \frac{\partial U^l_{t-2}}{\partial \theta^l_{t-1}} \right) \right). \] Where, the kernel’s transpose in a dense layer, \(\theta^T\), is given by \(\frac{\partial U^l_t}{\partial s^l_t}\). ### 2.3 Practical Approaches to RTRL Prior work achieves online capabilities by approximating exact gradient computation. Of particular interest are OSTL (Bohnstingl et al., 2022) and OTTT (Xiao et al., 2022). 
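Both algorithms, like OTPE, operate on the LIF dynamics and surrogate derivative defined above. For reference, these can be sketched in JAX as follows; this is a minimal sketch under the standard subtraction-reset formulation, the function names are ours, and the fast-sigmoid slope of 25 matches the value used in the experiments, while `lam` and `v_th` are illustrative defaults.

```python
# Minimal JAX sketch of one dense subtraction-reset LIF step with a
# fast-sigmoid surrogate: Heaviside forward pass, derivative of the fast
# sigmoid, 1 / (slope * |v| + 1)^2, on the backward pass.
import jax
import jax.numpy as jnp

SLOPE = 25.0  # fast-sigmoid slope used in the paper's experiments

@jax.custom_vjp
def spike(v):                        # v = U - V_th
    return (v > 0.0).astype(jnp.float32)

def spike_fwd(v):
    return spike(v), v

def spike_bwd(v, g):
    surrogate = 1.0 / (SLOPE * jnp.abs(v) + 1.0) ** 2
    return (g * surrogate,)

spike.defvjp(spike_fwd, spike_bwd)

def lif_step(theta, u, s_in, lam=0.9, v_th=1.0):
    """One time-step of a fully connected subtraction-reset LIF layer."""
    u = lam * u + s_in @ theta       # leak and integrate weighted input spikes
    s_out = spike(u - v_th)          # Heaviside forward, surrogate backward
    u = u - v_th * s_out             # subtraction-based reset
    return u, s_out
```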
OSTL focuses on achieving bio-plausibility by implementing eligibility traces (Gerstner et al., 2018), through a mechanism derived from RTRL. Another similar algorithm is OTTT, which ignores the reset mechanism in the gradient calculation of SNNs to reduce the storage and compute overhead relative to OSTL. See Table 1 for the time and space complexity of the evaluated algorithms. 2.3.1 OSTL OSTL is an RTRL approximation that separates spatial and temporal gradients for calculating the overall gradients. Because \( \frac{\partial U_t}{\partial \theta} \) in RTRL has \( n^3 \) entries (where \( n \) is the layer size), the storage demand of RTRL limits scalability. OSTL approximates \( \frac{\partial U_t}{\partial \theta} \) by assuming that all nonzero elements are along the diagonal, thus reducing its size to \( n^2 \). This assumption holds for a feed-forward SNN, achieving exact gradient computation in a network without hidden layers. In order to train a deep network, OSTL backpropagates a spatial gradient through the current time-step (\( t \)) and combines this with a temporal gradient, which maintains how a layer’s parameters influenced its most recent output, \( s_t \). While all temporal dynamics concerning a single layer’s influence on \( s_t \) are accounted for, the influence of any spiking activity at previous time-steps is excluded, as shown \[ \frac{\partial L}{\partial \theta^{l-1}} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s_t} \frac{\partial s_t}{\partial U_t} \frac{\partial U_t}{\partial \theta^{l-1}} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s_t} \frac{\partial s_t}{\partial U_t} \left( \frac{\partial U_t}{\partial \theta^{l-1}} + \frac{\partial U_t}{\partial \theta^{l-1}} \right). \] 2.3.2 OTTT OTTT is conceptually similar to OSTL. Since \( \frac{\partial s_t}{\partial U_t} \) is the Dirac delta function, a surrogate derivative normally takes its place, but OTTT chooses only to apply the surrogate derivative when calculating the spatial gradient and refrains from doing so when updating the temporal gradient. In other words, the gradient calculation from BPTT which uses \( s_t^{l-1} \) can be substituted in for the calculation in RTRL for \( \frac{\partial U_t}{\partial \theta} \). Since the derivative of the Heaviside step function is zero almost everywhere, the temporal gradient in a subtraction-based LIF neuron simplifies to the summation over time of the product between the time-weighted leak \( \lambda^{T-t} \) and \( s_t^{l-1} \). Consequently, only a running weighted sum of the input, \( \hat{a} \) in Xiao et al. (2022), is required to be stored and updated, reducing the space complexity of OTTT to \( O(n) \). While the surrogate derivative is normally applied as \( \frac{\partial U_t}{\partial \theta^{l-1}} = \lambda + \frac{\partial U_t}{\partial s_t} \frac{\partial s_t}{\partial \theta^{l-1}} \), OTTT uses the Heaviside function instead, such that \( \frac{\partial U_t}{\partial \theta^{l-1}} = \lambda + \frac{\partial U_t}{\partial s_t} \cdot \emptyset \), which is simply \( \lambda \). Consequently, OTTT’s temporal gradient, \( \hat{a} \), can be calculated as \( \hat{a}_{t=T} = \sum_{t=1}^{T} \lambda^{T-t} s_t^{l-1} \). 3 POST-SYNAPTIC ESTIMATION FOR SNNs In a feed-forward SNN with hidden layers, temporal dynamics that influence subsequent layers in the network are not addressed by current approximations. We achieve a space-efficient method of approximating these gradients in a similar fashion to OTTT. 
Specifically, we do not apply the surrogate derivative for \( \frac{\partial s_t^{l+1}}{\partial U_t^{l+1}} \) when calculating \( \frac{\partial U_t^{l+1}}{\partial \theta^{l+1}} \) during the calculation of the temporal dynamics in layer \( l \). We also assume the time constant is global. During forward gradient computation in layer \( l \), we assume the subsequent layer’s temporal dynamics are captured by the running sum of the subsequent layer’s inputs, which are the output spikes of layer \( l \). By maintaining a running weighted sum of how the parameters in layer \( l \) influenced previous spikes, the parameters’ contribution to the subsequent layer’s temporal dynamics is represented during gradient calculation. In order to calculate how the parameters affect spikes at the current time-step, OTPE implements OSTL to produce the diagonal Jacobian matrix, \( \frac{\partial s_t^l}{\partial \theta^l} \) at each time-step. OTPE selectively applies the surrogate derivatives such that \[ \frac{\partial L}{\partial \theta^{l-1}} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s_t} \frac{\partial s_t}{\partial U_t} \frac{\partial U_t}{\partial \theta^{l-1}} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial s_t} \frac{\partial s_t}{\partial U_t} \left( \frac{\partial U_t}{\partial \theta^{l-1}} + \lambda \cdot \frac{\partial U_t}{\partial \theta^{l-1}} \right). \] Since \( \frac{\partial U_t}{\partial s_t^{l-1}} \) in eqn [2] is the kernel’s transpose, \( \theta^T \), we avoid recursion, and can write \[ \frac{\partial U_t}{\partial \theta^{l-1}} = \theta^T \sum_{t=1}^{T} \lambda^{T-t} \frac{\partial s_t^{l-1}}{\partial \theta^{l-1}} = \theta^T \hat{R}^{l-1}. \] | Name | Space Complexity | Time Complexity | |----------|------------------|-----------------| | BPTT | $O(Tn)$ | $O(Tn^2)$ | | OTTT | $O(n)$ | $O(Tn + n^2)$ | | OSTL | $O(n^2)$ | $O(Tn^2)$ | | Approx OTPE | $O(n)$ | $O(Tn + n^2)$ | | OTPE | $O(n^2)$ | $O(Tn^2)$ | Table 1: Time and space complexity of BPTT and all tested approximate algorithms for calculating gradients at time-step $T$. The complexity is in reference to a single dense layer with an input and output size $n$ and batch size of 1. As seen in eqn (4), similar to OSTL, we only need to maintain a running weighted sum of $\frac{\partial s_{t-1}}{\partial \theta^{l-1}}$, which we denote as $\hat{R}$. OTPE differs from OSTL by approximating the term OSTL ignores, which is the temporal gradient from post-synaptic neurons. Q3: Because $\hat{R}_{t=0} = \frac{\partial U_t}{\partial \theta^{l-1}}$ and updates as $$\hat{R}_t = \frac{\partial U_t}{\partial \theta^{l-1}} + \lambda \cdot \hat{R}_{t-1},$$ $\hat{R}$ is the same shape as $\frac{\partial U_t}{\partial \theta^{l-1}}$, which is $n^2$ due to OSTL’s sparse assumption. $$\frac{\partial L_t}{\partial U_t} \approx \lambda \cdot \theta^{lT} \hat{R}^{l-1}, \quad \frac{\partial L_t}{\partial \theta^{l-1}} \approx \sum_{t=1}^{T} \frac{\partial L_t}{\partial s_t} \frac{\partial s_t}{\partial U_t} \left( \frac{\partial U_t}{\partial \theta^{l-1}} + \lambda \cdot \theta^{lT} \hat{R}^{l-1} \right).$$ ### 3.1 Approximate OTPE Although OTPE achieves scalability comparable to OSTL, OTTT remains more scalable. 
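To make the layer-local bookkeeping concrete before the cheaper approximation is introduced, the following JAX sketch (ours, deliberately simplified) maintains an OSTL-style diagonal estimate of a layer's spike Jacobian together with the running weighted sum $\hat{R}$ that OTPE adds on top of it; the reset's contribution to the trace is omitted for brevity, and the combination of $\hat{R}$ with the backpropagated error follows Eqs. (3)–(4) rather than being shown here.

```python
# Simplified sketch of OTPE's per-layer state for one dense LIF layer.
# Shapes: theta (n_in, n_out); u (n_out,); s_in (n_in,); p, r_hat (n_in, n_out).
# `p` approximates dU_t/dtheta under the diagonal assumption, `e` is ds_t/dtheta,
# and `r_hat` is OTPE's leaky running sum of `e` over time.
import jax.numpy as jnp

def surrogate_grad(v, slope=25.0):
    # derivative of the fast sigmoid at v = U - V_th
    return 1.0 / (slope * jnp.abs(v) + 1.0) ** 2

def otpe_layer_step(theta, u, p, r_hat, s_in, lam=0.9, v_th=1.0):
    u = lam * u + s_in @ theta                     # membrane update
    s_out = (u > v_th).astype(jnp.float32)         # spikes (Heaviside)
    p = lam * p + s_in[:, None]                    # trace of dU_t/dtheta
    e = surrogate_grad(u - v_th)[None, :] * p      # diagonal ds_t/dtheta
    r_hat = lam * r_hat + e                        # OTPE running sum (R-hat)
    u = u - v_th * s_out                           # subtraction reset
    # The error backpropagated from the post-synaptic layer is later applied to
    # r_hat (scaled by lam and that layer's weights), per Eqs. (3)-(4), in
    # addition to the usual OSTL spatial term.
    return u, p, r_hat, s_out
```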
To combine the benefits of OTPE’s temporal information with $O(n)$ space complexity, we make two assumptions on top of OTPE: 1) We decouple the components of $\hat{R}$, $\frac{\partial s_t}{\partial \theta^{l}}$ and $\frac{\partial U_t}{\partial \theta^{l}}$, through time by assuming $$\sum_{t=1}^{T} \lambda^{T-t} \frac{\partial s_t}{\partial \theta^{l}} \frac{\partial U_t}{\partial \theta^{l}} \approx \left( \frac{1}{T} \sum_{t=1}^{T} \lambda^{T-t} \frac{\partial s_t}{\partial \theta^{l}} \right) \cdot \left( \sum_{t=1}^{T} \frac{\partial U_t}{\partial \theta^{l}} \right).$$ We maintain a size $n$ running weighted average of the surrogate gradients through time, which we refer to as $\bar{g} = \frac{1}{T} \sum_{t=1}^{T} \lambda^{T-t} \frac{\partial s_t}{\partial \theta^{l}}$. 2) We approximate our layer-local temporal calculation in OTPE to only store vectors of size $n$ by assuming the same temporal dynamics as OTTT for a single layer, $\frac{\partial U_t}{\partial \theta^{l}} \approx \hat{a}$, such that $\frac{\partial U_t}{\partial \theta^{l-1}} \approx \theta^{lT} \cdot \bar{g} (\hat{a}^{l-1} + \lambda \cdot \hat{z}^{l-1})$, where $\hat{z}$ is a weighted sum of OTTT’s weighted sum ($\hat{z}^{l-1} = \hat{a}^{l-1} + \lambda \cdot \hat{z}^{l-1}$). When the spatial gradient reaches a layer, $\bar{g}$ is used in place of the immediate time-step’s surrogate. The kernel gradients are calculated by taking the outer product of the back-propagated loss and $\hat{z}$. When considering networks with multiple hidden layers, backpropagating error for OTPE or Approx OTPE is similar to OSTL and OTTT, with the exception of using $\bar{g}$ instead of the most recent time-step’s surrogate gradients. $$\frac{\partial L_T}{\partial \theta^{l-1}} \approx \left( \frac{\partial L_T}{\partial s_T} \frac{\partial s_T}{U_T} \cdot \theta^{lT} \right) \cdot \bar{g} \otimes \hat{z}. \quad (5)$$ ### 3.2 F-OTPE OTPE calculates gradients under the assumption that output spikes of a hidden layer at previous time-steps are accumulated with membrane leak in the following layer ($\hat{R}^l = \frac{\partial \sum_{t=1}^{T} \lambda^{T-t} s_t}{\partial \theta^{l}}$). In the output layer, this can apply to the leaked accumulation of spikes through time. Suppose we calculate loss using the accumulated spikes instead of solely relying on the spiking behavior at the current time-step. In this case, we can use $\hat{R}$ to exactly calculate the derivative of the loss with respect to the output layer’s parameters ($\frac{\partial L}{\partial \sum_{t=1}^{T} \lambda^{T-t} s_t} = \frac{\partial L}{\partial \sum_{t=1}^{T} \lambda^{T-t} \hat{R}^l} = \frac{\partial L}{\partial \theta^{l}}$). We test the application of OTPE to all layers in the network for online learning, applying softmax cross-entropy loss to a leaking sum of the model’s output. We also do this for its approximation, denoted by “F-.”. Figure 2: Mean validation accuracy across four seeds throughout offline training for (a) R-Randman, (b) T-Randman, and (c) SHD respectively. Shaded regions indicate one standard deviation. 4 EXPERIMENTS We test and compare all algorithms for online and offline training on Randman (Zenke and Vogels, 2021), a synthetic spiking dataset, and SHD (Cramer et al., 2020) (dataset related parameters provided in Appendix A.1). In offline learning, we evaluate accuracy, cosine similarity to exact gradients produced via BPTT, and training trajectories in the loss landscape. 
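The gradient-alignment metric referred to throughout is the standard cosine similarity between an algorithm's per-layer gradient and the corresponding BPTT gradient; a minimal JAX sketch (ours) is shown below.

```python
# Layer-wise gradient cosine similarity between an approximate gradient and
# the exact BPTT gradient, both flattened to vectors.
import jax.numpy as jnp

def cosine_similarity(g_approx, g_exact, eps=1e-12):
    a = g_approx.reshape(-1)
    b = g_exact.reshape(-1)
    return jnp.dot(a, b) / (jnp.linalg.norm(a) * jnp.linalg.norm(b) + eps)

# Example (hypothetical dictionaries of per-layer gradients):
# sim_h1 = cosine_similarity(grads_otpe["hidden1"], grads_bptt["hidden1"])
```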
In offline training on SHD, we evaluate test accuracy across multiple model configurations to identify suitable hyperparameters for comparing all algorithms. For online training on SHD, we run a learning rate search across three model configurations for the same purpose (see Appendix A.2 for full hyperparameter search results). We exclude f-OTPE from offline learning comparison because loss is calculated differently, preventing a fair comparison of gradient approximation quality. All tests were performed on NVIDIA GPUs, using the Adamax optimizer (Kingma and Ba, 2014). We chose the Adamax optimizer because of its relative effectiveness on temporal learning and use in the original SHD study. We provide JAX code to enable the reproduction of our results (Bradbury et al., 2018; Heek et al., 2023). For all tests, we use a fast sigmoid slope of 25, the default value in snnTorch (Eshraghian et al., 2023). BPTT also delivered highest accuracy with this value in our hyperparameter search (Appendix A.2). Randman is a synthetic dataset, aimed at studying SNN capabilities in learning spike timing patterns. Spike-times are generated on a random smooth manifold, projected onto a high-dimensional hypercube. These are then sampled to generate a spike-train, which the SNN learns to classify. The dimensionality and smoothness of the manifold is varied to adjust the task difficulty. Neurons, firing once per trial, only contain temporal information, a format we term T-Randman. We also modify Randman to test rate-based learning through R-Randman. Here, spike-rates are determined by manifold values and are generated to be temporally uncorrelated through random shuffling. Spiking Heidelberg Digits (SHD) is a spiking dataset consisting of 10,000 recordings of spoken digits 0-9 in German and English, across 20 classes. The recordings are processed through a model of the inner ear to produce a spiking representation. Since Randman’s structure does not reflect natural data, we also evaluate model performance on SHD to reflect performance for a practical application. We evaluate accuracy after both online and offline training for SHD. 4.1 EVALUATING LEARNING PERFORMANCE Figure 2 compares OTPE and Approx OTPE against BPTT, OTTT, and OSTL for offline training. We compare performance across multiple datasets (R-Randman, T-Randman, SHD). We train a 2-hidden-layer model with a layer width of 128 for both variations of Randman. Performance on R-Randman is similar across methods (the range of their means is 1.3%), and both OSTL and OTTT beat BPTT, OTPE, and Approx-OTPE. However, when evaluated on T-Randman, OTPE and Approx-OTPE accuracy is on-par with BPTT (OTPE’s smoothed validation accuracy on SHD is ~ 0.8% lower, ~ 0.7% higher on T-Randman, and ~ 0.1% higher on R-Randman) while OSTL (~ 7.4% lower on SHD, ~ 3.8% lower on T-Randman, and ~ 1.1% higher on R-Randman) and OTTT (~ 9.3% Figure 3: Mean gradient cosine similarity throughout training for each layer, evaluated on (a) R-Randman, (b) T-Randman, and (c) SHD. The output layer (O), has a higher cosine similarity than the earlier layers hidden-1 (H1) and -2 (H2) due to accumulating approximation error during backprop. Figure 4: Model-wide, mean gradient cosine similarity over training duration for (a) R-Randman, (b) T-Randman, and (c) SHD. Shaded regions indicate one standard deviation. lower on SHD, \( \sim 8.6\% \) lower on T-Randman, and \( \sim 0.7\% \) higher on R-Randman) underperform. 
All reported numbers are averages over multiple seeds, with SHD results also reported for the last 1000 minibatches. On SHD, Approx OTPE achieves 5.2\% lower validation accuracy than BPTT. This is \( \sim 2.2\% \) higher than OSTL’s validation accuracy and \( \sim 4\% \) higher than OTTT. We provide training loss in A.2 and A.3. Figure 3 shows layer-wise gradient cosine similarity between the gradient estimates of each algorithm (OTPE, Approx OTPE, OTTT, and OSTL), and BPTT for offline learning. We evaluate similarity across the three datasets for 2-hidden-layer networks with 128 layer width in both Randman configurations and a 2-hidden-layer network with 512 layer width. BPTT’s gradients are identical to the gradients in the output layers of OSTL and OTPE due to their exact formulation. OTTT and Approximate OTPE achieve an average \( \geq 0.99 \) cosine similarity in the output layer. As expected, the gradient similarity with BPTT decreases for deeper layers across all algorithms. OSTL and OTTT incur a \( \geq 40\% \) reduction in alignment with BPTT in the second hidden layer (H2), which further decreases to 60\% in the first hidden layer (H1). However, OTPE and Approx OTPE are consistently better aligned with those generated by BPTT, with Approx OTPE above 0.6 in H1 for T-Randman compared to \( \leq 0.2 \) for OSTL and OTTT. We consistently observe these trends in layerwise gradient alignment across the evaluated datasets. Additionally, when observed over the training duration, we see consistent results for layerwise gradient-similarity. The output layers for OTTT and OTPE show improved gradient alignment with BPTT over time (see Appendix A.3). OTPE shows this increasing alignment with BPTT’s gradient directions for the last hidden layer, whereas this trend reverses for OTTT. These trends are consistent across seeds, which may allow the slope of the trends to influence the standard deviation reported in Fig. 3. Figure 5: Evaluations of offline learning through the loss landscapes of the different algorithms for (a) R-Randman and (b) T-Randman, evaluated over the validation set. Evaluations are conducted every 800 mini-batches of training with BPTT’s model. We observe high similarity between BPTT’s model and those trained by OTPE and Approx OTPE in the model weight-space. Figure 6: Mean validation accuracy across four seeds for online training evaluated on (a) R-Randman, (b) T-Randman, and (c) SHD. Shaded regions indicate one standard deviation across seeds. As shown in Fig.4, OTPE consistently achieves the highest model-wise gradient cosine similarity with BPTT over the training duration. When evaluated on T-Randman, OTTT and Approximate OTPE are less aligned with BPTT early in the training (0.6 initially) before stabilizing above 0.9. For SHD and R-Randman, OTPE and its approximation consistently achieve a higher alignment (average of 0.99 and 0.97, respectively) than OSTL and OTTT (average of 0.96 and 0.95). We visualize the training loss landscape for different algorithms on the R-Randman and T-Randman tasks (see Figure 5(a) and (b)), as described in Li et al. (2018). The loss contours are determined by picking a centre point ($\theta^*$) and two direction vectors (the initial ($\delta$) and final ($\nu$) BPTT model) and then plotting $f(\alpha, \beta) = L(\theta^* + \alpha \delta + \beta \nu)$. The trajectories are rendered by mapping a model, every 800 minibatches during training, into the loss landscape. The strong alignment between BPTT, OTPE, and Approx. 
OTPE for R-Randman indicates a remarkable similarity between the trained models. For T-Randman, the models trained using OTPE and Approx. OTPE remain more closely aligned to BPTT compared to those trained using OSTL and OTTT. Across these different datasets, the training trajectories of OSTL and OTTT diverge the most from the other models, indicating a significantly altered trained model configuration compared to BPTT. Table 2 summarizes our performance results for SHD evaluated with different model configurations. We found that OTPE and its approximation perform best with five layers while OTTT, OSTL, and BPTT perform best with three layers. The performance difference between the three-layer and five-layer models is within one standard deviation for BPTT. In contrast OTTT and OSTL incur a substantial drop of 5% and 3.4% in test accuracy after online training, respectively, when training a five-layer model. BPTT unsurprisingly dominates test performance on SHD, but OTPE consistently outperforms OTTT and OSTL. We additionally observe high performance for OTPE across different model and training hyperparameters (data in Appendix A.2). This finding is consistent with Bauer | Name | Depth | Width | Offline Acc ±σ | Online Acc ±σ | |----------|-------|-------|----------------|---------------| | OTTT | 3 | 128 | 64.3% ± 1.0 | 66.7% ± 1.1 | | OSTL | 3 | 128 | 66.7% ± 1.0 | 68.5% ± 1.9 | | BPTT | 3 | 128 | **73.9% ± 2.5**| N/A | | OTPE | 3 | 128 | 72.5% ± 0.7 | **73.6% ± 1.4**| | A. OTPE | 3 | 128 | 67.0% ± 1.0 | 69.8% ± 1.2 | | F-OTPE | 3 | 128 | N/A | **73.3% ± 0.9**| | F-A. OTPE| 3 | 128 | N/A | 71.2% ± 1.7 | | OTTT | 3 | 512 | 70.5% ± 1.5 | 71.2% ± 0.8 | | OSTL | 3 | 512 | 70.5% ± 1.5 | 70.6% ± 0.7 | | BPTT | 3 | 512 | **78.1% ± 1.0**| N/A | | OTPE | 3 | 512 | 75.2% ± 0.5 | **75.4% ± 0.5**| | A. OTPE | 3 | 512 | 71.2% ± 1.1 | 71.8% ± 1.1 | | F-OTPE | 3 | 512 | N/A | **75.3% ± 0.3**| | F-A. OTPE| 3 | 512 | N/A | 71.5% ± 1.2 | | OTTT | 5 | 512 | 63.5% ± 2.1 | 66.2% ± 0.7 | | OSTL | 5 | 512 | 63.9% ± 1.0 | 67.2% ± 1.0 | | BPTT | 5 | 512 | **77.7% ± 0.6**| N/A | | OTPE | 5 | 512 | 76.4% ± 1.2 | **76.7% ± 0.7**| | A. OTPE | 5 | 512 | 74.3% ± 0.9 | **76.7% ± 0.8**| | F-OTPE | 5 | 512 | N/A | 74.1% ± 1.0 | | F-A. OTPE| 5 | 512 | N/A | 72.6% ± 1.3 | Table 2: Summary of test accuracy on SHD for different model configurations. Results for online learning report highest accuracy determined after learning rate search (Appendix A.2). et al. (2023), where the approximate gradient calculation of Shrestha and Orchard (2018) is more sensitive to the surrogate derivative slope than exact gradient calculation. Results comparing online learning performance across the different algorithms are shown in Fig. 6. Online training performance on R-Randman is consistent with those observed for offline training, with all approaches delivering similar accuracy (see training loss in Appendix A.3). The performance difference across OTPE, its approximations, OSTL, and OTTT is more apparent for T-Randman and SHD. Although we observed higher test accuracy for online learning on SHD than for offline learning, we attribute this to our hyperparameter search (see Appendix A.2). As seen in Fig. 6(c), when evaluated on SHD, we observe that both F-OTPE and F-Approx OTPE converge in performance earlier than OTPE. F-OTPE reaches its peak average performance on SHD after 4,500 mini-batches of online training. Meanwhile, OTPE does not appear to have reached peak validation accuracy even after training on 10,000 mini-batches. 
We also observe a substantial performance benefit of F-OTPE on the Spiking Speech Commands (Cramer et al., 2020) dataset in Appendix A.4. 5 DISCUSSION AND CONCLUSIONS We propose OTPE and its approximations to facilitate efficient online training in SNNs, by capturing temporal effects typically omitted by similar algorithms. While maintaining similar scalability to OSTL in both compute and memory costs, OTPE produces superior results while remaining layer-local. OTPE and its approximation demonstrate greater alignment to exact gradients in the hidden layers, which may be more beneficial in tasks requiring greater network depth. The training trajectories in the loss landscape consistently demonstrate that models trained with OTPE or its optimizations are closer in model weight-space to BPTT models than those trained using OTTT and OSTL. We evaluated our algorithms on SHD and on rate- and temporal-variations of the Randman task (R-Randman and T-Randman). We observe similar performance across all algorithms when evaluated on R-Randman. However, the temporal influence approximations used by OSTL and OTTT result in degraded performance on temporal tasks like T-Randman and SHD, with OTPE and its variants consistently outperforming them across model configurations and datasets. 6 REPRODUCIBILITY The code for our experiments can be found in the attached supplementary materials. The `readme.txt` file outlines the scripts for generating data and plots. REFERENCES F. C. Bauer, G. Lenz, S. Haghighatshoar, and S. Sheik. Exodus: Stable and efficient training of spiking neural networks. *Frontiers in Neuroscience*, 17:1110444, 2023. F. Benzing, M. M. Gauy, A. Mujika, A. Martinsson, and A. Steger. Optimal kronecker-sum approximation of real time recurrent learning. In *International Conference on Machine Learning*, pages 604–613. PMLR, 2019. T. Bohnstingl, S. Woźniak, A. Pantazi, and E. Eleftheriou. Online spatio-temporal learning in deep neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 2022. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. B. Cramer, Y. Stradmann, J. Schemmel, and F. Zenke. The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 33(7):2744–2757, 2020. J. K. Eshraghian, M. Ward, E. O. Neftci, X. Wang, G. Lenz, G. Dwivedi, M. Bennamoun, D. S. Jeong, and W. D. Lu. Training spiking neural networks using lessons from deep learning. *Proceedings of the IEEE*, 2023. W. Fang, Z. Yu, Y. Chen, T. Masquelier, T. Huang, and Y. Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 2661–2671, 2021. W. Gerstner, M. Lehmann, V. Liakoni, D. Corneil, and J. Brea. Eligibility traces and plasticity on behavioral time scales: experimental support of neohebbian three-factor learning rules. *Frontiers in neural circuits*, 12:53, 2018. J. Heek, A. Levskaya, A. Oliver, M. Ritter, B. Rondepierre, A. Steiner, and M. van Zee. Flax: A neural network library and ecosystem for JAX, 2023. URL http://github.com/google/flax. J. Kaiser, H. Mostafa, and E. Neftci. Synaptic plasticity dynamics for deep continuous local learning (decolle). *Frontiers in Neuroscience*, 14, 2020. 
ISSN 1662-453X. doi: 10.3389/fnins.2020.00424. URL https://www.frontiersin.org/articles/10.3389/fnins.2020.00424. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. *Advances in neural information processing systems*, 31, 2018. T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. *Nature communications*, 7(1):13276, 2016. Y. Maeda and M. Wakamura. Simultaneous perturbation learning rule for recurrent neural networks and its fpga implementation. *IEEE Transactions on Neural Networks*, 16(6):1664–1672, 2005. J. Menick, E. Elsen, U. Evci, S. Osindero, K. Simonyan, and A. Graves. A practical sparse approximation for real time recurrent learning. *arXiv preprint arXiv:2006.07232*, 2020. A. Mujika, F. Meier, and A. Steger. Approximating real-time recurrent learning with random kronecker factors. *Advances in Neural Information Processing Systems*, 31, 2018. E. O. Neftci, H. Mostafa, and F. Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. *IEEE Signal Processing Magazine*, 36(6):51–63, 2019. A. Rostami, B. Vogginger, Y. Yan, and C. G. Mayr. E-prop on spinnaker 2: Exploring online learning in spiking rnns on neuromorphic hardware. *Frontiers in Neuroscience*, 16:1018006, 2022. S. B. Shrestha and G. Orchard. Slayer: Spike layer error reassignment in time. *Advances in neural information processing systems*, 31, 2018.
miGpIhquyB
It is easy to imagine a *faithful* dataset on which this classifier will perform poorly, due to issues like style shift or poor generalization of the classifier. To be more concise: staking faithfulness on the accuracy of a classifier ignores the fact that this may be an issue of the classifier rather than the dataset that is being evaluated.
Understanding Large Language Models Through the Lens of Dataset Generation Anonymous authors Paper under double-blind review Abstract There has been increased interest in using Large Language Models (LLMs) for text dataset generation subject to a desired attribute, e.g., for use in downstream fine-tuning or training. These works generally focus on a single quality metric of the generated text, typically accuracy on a downstream task. However, this fails to consider whether the model even has the ability to faithfully model the data distribution of the desired real-world domain. In contrast, in this work, we additionally focus on important distributional metrics agnostic to the downstream task, such as data diversity and faithfulness. We show that even in simple domains, generated datasets reveal inherent trade-offs between these metrics across models and training regimes. Further, we find that our metrics not only describe the generated dataset, but also capture key aspects of the underlying model. This allows us to characterize the generated datasets, individual models and by comparison the properties of different model families and training paradigms. By focusing on sub-distributions well-represented in the training data of LLMs, we can, for example, show that popular instruction-tuning techniques strongly decrease the LLM’s text generation abilities, with respect to distributional aspects like diversity. 1 Introduction In recent years, large language models (large LMs, LLMs), often called foundation models, have become the state-of-the-art on many NLP tasks and beyond. These models can achieve outstanding performance on many tasks, often without any adaption or only with minimal prompting (Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Touvron et al., 2023a; Bommasani et al., 2021). Need for Task-Specific Data And Dataset Generation The direct application of LLMs can be effective and has the advantage that users do not have to do any additional training or data collection beforehand. However, in practice, smaller custom models that were trained on task-specific data still outperform LLMs, both in terms of task accuracy and hardware efficiency (Ye et al., 2022a; Hsieh et al., 2023; Gao et al., 2023). Recent work also focuses on fine-tuning LLMs themselves on task-specific data, either via standard training (Hu et al., 2022; 2023; Chen et al., 2021), self-improvement (Bai et al., 2022b; Wang et al., 2022b; Haluptzok et al., 2023; Wang et al., 2022a), reinforcement learning from human feedback (RLHF) (Stiennon et al., 2020; Bai et al., 2022a; Ouyang et al., 2022) or even via in-context learning (Brown et al., 2020). Fundamentally, however, all of these methods again require the construction of task-specific datasets, which can be a cumbersome and expensive. In response to this, recent works explore the use of LLMs themselves to automatically generate such datasets (Ye et al., 2022a; Gao et al., 2023; Ye et al., 2022b; Meng et al., 2022; Schick and Schütze, 2021; Josifoski et al., 2023; Chia et al., 2022; Bonifacio et al., 2022). Here, LLMs are prompted to generate synthetic data for a particular task, which can then be used to train, fine-tune or prompt a model, thereby avoiding the need for manual data collection. 
Despite promising early results in LLM-based data generation, prior work does not fully explore important distributional characteristics of the resulting synthetic datasets in comparison to real-world data, or how data quality differs across different LLMs and sampling strategies. However, going forward, achieving a better understanding of these factors is very important as it (1) provides insights on the actual data modeling capabilities of different LLMs as they are deployed more widely, and (2) can help inform and improve synthetic dataset generation in general. Metrics For Dataset Quality Providing an in-depth analysis of the quality of generated datasets requires analyzing data from various angles. To address this, we propose a multi-faceted evaluation framework, showcased in Fig. 1. We prompt an LLM with examples or instructions to generate a synthetic dataset, which we then compare to a real-world reference dataset, for a wide range of different metrics. Most prior work only uses performance on a downstream task, often classification, as the fundamental metric to characterize synthetically generated datasets. While task performance is important, it does not necessarily transfer to other tasks and does not allow for an effective comparison between models. To account for this, and inspired by [Ye et al., 2022a,b; Gao et al., 2021b], our framework goes beyond just task performance and relies on four extra characteristics that encompass further aspects of dataset quality: As included in Fig. 1, we examine complexity ( Complexity), i.e., how complex or simple the synthetic dataset is based on classifier performance, conformity ( Conformity), i.e., how well the synthetic dataset reflects the distribution of the (real) reference dataset, diversity ( Diversity), how distinct individual samples in the synthetic dataset are, and faithfulness ( Faithfulness), i.e., how well the synthetic samples fits the desired data domain, in addition to standard task performance ( Performance). Understanding LLM Dataset Generation To better understand the generative abilities of LLMs, we apply our framework to four simple, but representative domains, each of which is chosen such that we can be sure that it is well-represented in the training data of common LLMs, and that a real-world reference dataset is readily available. This allows us to assess the overall data generation capabilities of LLMs with respect to these domains, and to compare different LLMs and sampling strategies. We evaluate the generative abilities of 22 LLMs in total – corresponding to different model families, fine-tuning methodologies, training datasets, available openly or via the OpenAI API – and a wide range of sampling configurations, including zero- and few-shot strategies. Inherent Trade-Offs In an in-depth analysis, we reveal underlying tradeoffs between distributional metrics, which we find to apply broadly across all domains and models. We observe a quadratic relationship between diversity and conformity, but that diversity and faithfulness are inversely correlated. Moreover, conformity and faithfulness exhibit a very high correlation, but our experiments also show that small variations in this regard very much characterize a model’s generative behavior. Comparing Models We also compare across models and, e.g., find that LLAMA-2’s generative abilities mainly improve over LLAMA-1 on conformity, while keeping other characteristics constant. 
With respect to training paradigm, we find that instruction-tuned models generally exhibit higher faithfulness, but much lower diversity, conformity and complexity when compared to their vanilla base model counterparts. Increasing sampling temperature with instruction-tuned models can bring them more on-par with vanilla models, but even then, neither paradigm clearly dominates a general performance ranking. Lastly, we repeatedly find that OpenAI’s instruction-tuned models exhibit very different generative behavior when compared to open instruction-tuned models like LLAMA-2, thus hinting at notable differences with respect to their (proprietary) training data and procedure. 2 Synthetic Dataset Generation We first discuss the relevant background of language modeling and synthetic dataset generation, and the concrete data generation procedure we rely on. (Large) Language Models for Text Completion In this work, we consider language models capable of performing text completion. While our focus lies on large language models, all we assume is a simple text generation interface. We thus use the term language model throughout the rest of this paper. Further, we consider models relying on different training regimes, including vanilla LMs trained on a standard text completion objective \cite{Brown2020}, \cite{Touvron2023a}, \cite{Almazrouei2023} and instruction-tuned LMs, trained via fine-tuning or reinforcement learning with human feedback \cite{Ouyang2022}, \cite{Touvron2023b}. Synthetic Dataset Generation with LMs Due to their strong generative capabilities, recent work has started to incorporate LMs for automated dataset generation, either to directly train downstream models \cite{Taori2023}, \cite{Chiang2023}, or as part of a self-improvement process \cite{Haluptzok2023}. Given a domain \( D \), the goal is to construct a dataset \( S_D \) of samples that fit domain \( D \). Interesting choices for \( D \) include text of sentiment, forms of speech, instruction following and examples of e.g. puzzle solving. If this generation process additionally leverages some (small) existing reference dataset \( R_D \), it can also be understood as a form of LM-based data augmentation. In this work, we specifically consider dataset generation for classification. More specifically, we construct synthetic datasets \( S_D \), given a suitable instructive or few shot \cite{Brown2020} prompt. As \( D \), we choose common domains like movie reviews or posts in online forums, because we can safely assume that these lie in-distribution for all considered LMs and human-curated reference datasets \( R_D \) are readily available for comparison. More importantly, common data domains allow us to measure the extent to which the LMs have learned a good representation of these data domains during training. For each domain, e.g. movie reviews, we define a set of classes \( \{c_1, \ldots, c_n\} \), e.g. positive, undecided, negative, etc. To generate synthetic data, we prompt an LM to produce new dataset samples that fit the different classes \( c_i \), using class- and domain-specific prompts \( p_i \). Generative Pipeline We illustrate the data generation pipeline we consider, in the left part of Fig. 1 (in green ○). Here, we generate samples for the domain \( D = \text{posts of a subreddit} \) (type of online forum) for subreddits of different topics, e.g. explainlikeimfive (eli5), a community where explanations in child-appropriate language are shared and askhistorians, where historians answer questions. 
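As a rough sketch of this generation loop (not the implementation used in the paper), the snippet below produces labeled synthetic samples from class-specific prompts; `lm_complete` is a placeholder for whatever text-completion interface is used (open model or API), and the template mirrors the subreddit prompt discussed next.

```python
# Minimal sketch of the zero-shot generation pipeline; `lm_complete(prompt, temperature)`
# is an assumed text-completion interface, not a specific library call.

def generate_synthetic_dataset(lm_complete, classes, prompt_template, n_per_class, temperature=1.0):
    dataset = []
    for label in classes:                          # class- and domain-specific prompts p_i
        prompt = prompt_template.format(label=label)
        for _ in range(n_per_class):
            text = lm_complete(prompt, temperature=temperature)
            dataset.append({"text": text.strip(), "label": label})
    return dataset

# Example for the subreddit domain (labels play the role of the classes c_1, ..., c_n):
# template = "A question that appeared on the subreddit '{label}'"
# synthetic = generate_synthetic_dataset(lm, ["eli5", "askhistorians"], template, n_per_class=500)
```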
We consider both zero-shot and few-shot sampling. In the zero-shot setting we prompt a model with "A question that appeared on the subreddit 'eli5'". Adapting this for each of the classes \( \{c_1, \ldots, c_n\} \), allows us to obtain a wide variety of labeled samples fitting domain \( D \). In the few-shot setting we additionally provide samples from the reference dataset \( R_D \). We experiment with varying sampling temperatures (higher temperature leading to higher entropy samples) to further analyze the tradeoffs present within an LM. Using this generative pipeline, we construct synthetic classification datasets for a number of exemplary domains (see §4), for which we also obtain (human-curated) real datasets as comparison point. 3 Evaluation Framework We now introduce our evaluation framework for the correct representation of common data domains \( D \) in LLMs. For this purpose, we compare datasets against a valid representation of a data domain \( D \) and select a human-curated reference dataset \( R_D \) for each domain \( D \). Furthermore, we need to define a set of characteristics that are indicative of data quality. We extend and adjust characteristics found in previous works on dataset generation to evaluate a given synthetic dataset \( S_D \) using five important characteristics: faithfulness, diversity, conformity, complexity, and downstream performance. Other than performance, of these metrics, faithfulness and conformity have been used in previous work directly, though not as the main focus of their evaluation \cite{Ye2022a}. Additionally, we modify the diversity metric used in these works to be suitable for our purposes and introduce complexity as a new characteristic to provide a full and comprehensive evaluation of a synthetic dataset. We now describe each of these characteristics in detail. Faithfulness We start by considering faithfulness, i.e., how well a dataset fits the given domain $D$. Faithfulness quantifies how much noise is introduced by the generation process that may impair model training. To measure faithfulness, we fine-tune a small classifier $M_{R_D}$ on the reference dataset $R_D$, and evaluate its performance in terms of accuracy on the respective synthetic dataset $S_D$. We thus measure faithfulness as $$\text{faithfulness}(S_D) = \text{accuracy}(M_{R_D}, S_D).$$ We note that faithfulness does not only measure the correctness of labels associated with generated samples. It is also influenced by the quality of the generated samples themselves and whether they are representative of the reference dataset since non-representative samples are more likely to be misclassified by the classifier. Furthermore, while the classifier can provide an estimate of the faithfulness of the dataset, it is not a perfect measure and may be influenced by the quality of the classifier. However, since the classifier is the same for all synthetic datasets, we can still use it to compare the faithfulness of different datasets. Diversity While LMs may generate faithful datasets, we need to ensure that the resulting samples are diverse rather than repetitive. To account for this, we also measure diversity, i.e., how distinct individual samples in the dataset are. Previous work on text dataset generation rely on Self-BLEU (Zhu et al., 2018) or Distinctness-$n$ (Li et al., 2016) to measure diversity. However, these metrics are not suitable for the purposes of evaluating the diversity of the text generated by LMs. 
Indeed, Self-BLEU and Distinctness-$n$ exhibit a logarithmic dependence on dataset size as demonstrated in App. A. Therefore, these metrics are not directly comparable across different datasets and cannot be used to evaluate the inherent diversity of an LMs within a given domain. We therefore propose a normalized version of Distinctness-$n$ to correct for its size dependence. We do so by averaging Distinctness-$n$ over random subsets of the dataset of fixed size $k$. By keeping $k$ constant throughout all experiments, this metric is directly comparable across different datasets. More concretely, given a dataset $S_D$, let $L(S_D)$ be the multi-set obtained by lemmatizing all samples in $S_D$ and collecting the obtained words. Let $C(X)$ denote the unique number of tokens among $X = x_1, ..., x_k$. We define the diversity as $$\text{diversity}_k(S_D) = \frac{1}{k} \mathbb{E}_X [C(x_1, ..., x_k)]$$ where $X \subseteq L(S_D), |X| = k$. Conformity While text generated with recent iterations of instruction-tuned models (Chiang et al., 2023; OpenAI, 2023; Geng et al., 2023) can be of high quality, diverse and faithful, a resulting dataset may still not fit the distribution of human-written text in a more casual setting due to the inability of these models to generate human-like text. Since common data domains contain a lot of internet-based dialect, overly high-quality responses may fall out of distribution. For example, a dataset for movie review analysis may contain both positive and negative reviews, but overall writing skills per author may vary. If a corresponding synthetic dataset only contains high-quality reviews, it may not be representative of the real distribution of reviews. To capture this, we measure conformity to quantify the similarity between the distributions of a synthetic dataset and a real reference dataset. For this, we employ the MAUVE metric (Pillutla et al., 2021), which indicates differences between two text distributions by calculating the Kullback-Leibler (KL) divergence between their smoothed representation in sentence embedding space. $$\text{conformity}(S_D) = \text{mauve}(R_D, S_D)$$ Complexity Furthermore, it is possible that synthetic data looks natural and diverse, but the resulting samples are overly simplistic, e.g. when synthetic positive movie reviews only consist of reviews that are very good without any nuance in the samples. A classifier trained on an overly simplistic dataset has worse generalization error and therefore less utility. We therefore include the degree of data complexity as a core characteristic. To measure this, we train a small classifier $M_{\text{train}(S_D)}$ on a training split of the synthetic dataset $S_D$ under consideration, and evaluate its accuracy on a held-out (also synthetic) validation split. Based on this, we define the complexity inversely proportional to the resulting validation accuracy, as follows: $$\text{complexity}(S_D) = 1 - \text{accuracy}(M_{\text{train}(S_D)}, \text{val}(S_D))$$ Table 1: Overview of the models and training regimes considered in our comparative analysis. 
| | Vanilla | Instruction-Tuned | |------------------|------------------|-------------------| | | 350M 1.2B 6-7B | 13B 175B | 350M 1.2B 6-7B | 13B 175B | | GPT-3 | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ | | GPT-3.5 | | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ | | Falcon | ✓ ✓ | | | Llama-1 | ✓ ✓ | ✓ ✓ | | Llama-2 | ✓ ✓ | | | CodeLlama | ✓ ✓ | ✓ ✓ | † INSTRUCTGPT-3.5-175Bppo, INSTRUCTGPT-3.5-175Bchat and INSTRUCTGPT-3.5-175Bchat-instruct ‡ We use Vicuna-7B, 13B as instruction-tuned Llama-1 models. This metric allows us to measure the complexity of a dataset by measuring the generalization error on the same distribution. If this error is very low, the dataset is overly simplistic and the model can likely not generalize well to samples from the unseen reference dataset and has therefore low utility. Performance Overall, the four previous characteristics are summarized in the performance of a synthetic dataset. The generalization performance is measured by training a model on the synthetic dataset $S_D$ and evaluating it on the reference dataset $R_D$. We therefore report the accuracy of a model $M_{S_D}$ trained on $S_D$ and evaluated on the reference dataset $R_D$ as the metric $$\text{performance}(S_D) = \text{accuracy}(M_{S_D}, R_D).$$ 4 EVALUATION To assess the generative abilities of different models, we apply our evaluation framework to a wide range of different models belonging to five different size classes (350M – 175B), four different families (GPT, Falcon, Llama-1 and LLama-2) and two different training regimes (vanilla, i.e., non-instruction-tuned, and instruction-tuned). In this section, we first describe our experimental setup and then discuss our main results through two lenses: (1) We identify three inherent tradeoffs between our core characteristics, that we observe consistently across all models, and, (2) we compare the different models and training regimes in terms of their generative performance, as measured by our framework. Data Domains We choose common data domains that we can safely assume to be in-distribution for all examined models, and that we can find real-world reference datasets for. Concretely, we use AGNews (Zhang et al., 2015) to perform news genre classification for news headlines, SST-2 (Socher et al., 2013) for sentiment analysis of movie reviews, ELI5 for subreddit classification of online forum posts (Fan et al., 2019) and a subset of GoEmotions (Demszyk et al., 2020) for general emotion classification. For more details on these domains and the reference datasets, we refer to App. B. Models and Prompting We provide an overview of all models (Almazrouei et al., 2023; Touvron et al., 2023a,b; Chiang et al., 2023; Brown et al., 2020) and training regimes (Ouyang et al., 2022) considered in our analysis in Table 1. For data generation, we mostly rely on simple zero-shot instructive prompts to generate data for a given domain, allowing us to directly access the raw distribution modeled by a given model. For the classifiers trained as part of our evaluation framework, we fine-tune pre-trained DistilBERT (Sanh et al., 2019) models. For further details we refer to App. C. Sampling and Aggregation For each domain and model, we generate synthetic datasets of 3000 samples each. We do so for up to 5 different sampling temperatures $T \in \{0.7, 1.0, 1.3, 1.6, 1.9\}$ for instruction-tuned models and $T \in \{0.7, 1.0, 1.3\}$ for vanilla models as samples quickly become degenerate. For most models, we also include a sample from the nucleus distribution with $p = 0.9$. 
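Before turning to the results, the sketch below illustrates how the metrics of Section 3 could be computed. It is an illustrative sketch rather than the paper's code: classifier predictions are passed in as callables (standing in for the fine-tuned DistilBERT models), the subset size `k` and number of subsets are placeholder values, and MAUVE is assumed to be available via the `mauve-text` package.

```python
import random
import mauve  # pip install mauve-text (assumed dependency for the conformity metric)

def _accuracy(predict, dataset):
    """dataset: list of {'text': ..., 'label': ...}; predict: text -> label."""
    return sum(predict(x["text"]) == x["label"] for x in dataset) / len(dataset)

def faithfulness(predict_ref_clf, synthetic):
    """Accuracy on S_D of a classifier fine-tuned on the reference dataset R_D."""
    return _accuracy(predict_ref_clf, synthetic)

def diversity_k(tokens, k=500, n_subsets=20, seed=0):
    """Normalized Distinctness: expected unique-token fraction over random size-k subsets.
    `tokens` is the multi-set L(S_D) of lemmatized tokens pooled over all samples."""
    rng = random.Random(seed)
    return sum(len(set(rng.sample(tokens, k))) / k for _ in range(n_subsets)) / n_subsets

def conformity(reference_texts, synthetic_texts):
    """MAUVE between the reference and synthetic text distributions."""
    return mauve.compute_mauve(p_text=reference_texts, q_text=synthetic_texts).mauve

def complexity(predict_synth_clf, synthetic_val):
    """1 - accuracy on held-out S_D samples of a classifier trained on a split of S_D."""
    return 1.0 - _accuracy(predict_synth_clf, synthetic_val)

def performance(predict_synth_clf, reference):
    """Accuracy on R_D of a classifier trained on S_D."""
    return _accuracy(predict_synth_clf, reference)
```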
To account for the stochasticity of the sampling process, we generate 5 datasets per configuration and report the average results for our metrics across these 5 datasets. In App. E we discuss the resulting standard deviations, which are small and do not impact the conclusions presented here.

Figure 2: Tradeoffs between various metrics in the zero-shot setting. From left to right: Tradeoffs in diversity and conformity, diversity and faithfulness, and complexity and faithfulness. Arrows indicate the direction of higher sampling temperature for the same model.

In our main evaluation, we always report the average of our metrics across all considered data domains, and typically focus only on a subset of models, to facilitate readability. Still, our results hold for all considered models and domains, unless otherwise noted. We include full results in App. E.

4.1 MODEL-INHERENT CHARACTERISTICS

We first look at model-inherent characteristics by considering tradeoffs between our core characteristics, sampling from the same model with varying temperature. Naturally, higher sampling temperature can be expected to correlate with diversity; however, we further observe other meaningful tradeoffs. To illustrate, Fig. 2 shows the underlying relationships of our characteristics, plotting diversity, conformity, faithfulness and complexity, where directed arrows indicate the effect of increasing sampling temperature with a given model. We now discuss each of these plots in turn. In App. D, we show that the same tradeoffs hold when the dependent variable is model size.

1. Conformity v. Diversity We find that conformity initially increases with diversity (Fig. 2 left). However, as soon as a threshold near the reference dataset's diversity is reached, this trend reverses. We explain this by the fact that conformity actually measures closeness of the synthetic data distribution to the real reference distribution. At the same time, diversity can be seen as a measure for how wide this distribution is. When the width (or diversity) of the reference and synthetic dataset match, conformity will generally be higher. However, lower diversity means that the resulting dataset provides a very narrow view on the data domain, while higher diversity results in a dataset that represents concepts from outside the data domain as well. More technically, we observe a quadratic dependence of conformity on diversity, which is statistically significant (p-value < 0.001) and centered around a diversity of 0.5, slightly higher than the average diversity of the reference dataset.

2. Diversity v. Faithfulness We observe an inverse linear relationship between diversity and faithfulness (Fig. 2 middle), as faithfulness decreases with increasing diversity. This is because higher diversity indicates samples from a wider distribution, which generally also includes samples atypical for the domain and reference dataset, i.e., samples that are not faithful to the reference dataset. Here, the difference between instruction-tuned (dashed lines) and vanilla (solid lines) models is especially notable. While vanilla models generate more diverse datasets at the same temperature, instruction-tuned models generate more diverse datasets at a fixed level of faithfulness.

3. Faithfulness v. Complexity Finally, we observe a strong linear relationship between faithfulness and complexity (Fig. 2 right), with a Pearson correlation coefficient of $-0.93$.
In an ideal scenario, a classification model trained on the generated dataset is equal to the model trained on the reference datasets. In such a case, faithfulness would equal $1 - \text{complexity}$. However, as shown in the figure, we find that models appear shifted with respect to each other and do not follow the inverse relationship perfectly. This suggests the existence of a model-inherent faithfulness-complexity ratio, which in turn is an important indicator of dataset quality. In fact, further linear analysis reveals that the sum of the faithfulness and complexity metrics is an equally good predictor of dataset performance as the individual metrics, showing that the main dependence of performance is captured by their sum. Table 2: Comparing LLAMA-based and LLAMA-2-based model for sampling temperature $T = 1$ in the zero-shot setting. Metrics for real data are measured with respect to a held-out validation set. | Model Name | Complexity | Faithfulness | Diversity | Conformity | Performance | |------------------|------------|--------------|-----------|------------|-------------| | Real data | 0.145 | 0.855 | 0.466 | 0.963 | 0.855 | | LLAMA-7B | 0.235 | 0.708 | 0.439 | 0.357 | 0.749 | | LLAMA-2-7B | 0.238 | 0.714 | 0.449 | 0.440 | 0.754 | | VICUNA-7B | **0.113** | 0.815 | **0.365** | **0.222** | 0.744 | | LLAMA-2-CHAT-7B | 0.065 | **0.860** | 0.334 | 0.173 | **0.749** | ### 4.2 MODEL COMPARISON Going beyond model-inherent characteristics, we now compare across different models and training regimes in terms of our evaluation metrics and the identified tradeoffs from the previous section. We first discuss the effect of instruction-tuning and then compare the different model families. Finally, we compare the different models in terms of their overall performance. #### Instruction-tuning Firstly, we observe a clear difference between instruction-tuned and vanilla models. While neither consistently balances all metrics and tradeoffs, we generally find that instruction-tuned models are substantially more faithful, but exhibit less diversity than vanilla models (see Fig. 2 middle). Only at the highest sampling temperatures ($T = 1.9$) instruction-tuned models achieve similar faithfulness levels as their vanilla counterparts at the lowest temperatures ($T = 0.7$). As shown in Fig. 2(left), for LLAMA-{1, 2} and FALCON we can observe that instruction-tuned models generally exhibit less conformity, compared to their vanilla variants. We explain this with the high levels of curation with instruction-tuning datasets, whereas some of the reference datasets contain ungrammatical or otherwise malformed samples. Interestingly, for OpenAI instruction-tuned models like INSTRUCTGPT-3-175B we observe the opposite behavior, even when considering OpenAI models of the same size as the LLAMA and FALCON variants. We show a full comparison of all models in App. E. Considering complexity, we find that both vanilla and instruction-tuned synthetic datasets tend to be more complex at higher sampling temperatures (see Fig. 2(right)). However, instruction-tuned models behave significantly worse in the faithfulness-complexity-tradeoff, i.e. achieve lower faithfulness at similar complexity levels. Again, OpenAI models like INSTRUCTGPT-3-175B appear to defy this and can match the vanilla models in this regard. For downstream performance, we generally observe lower scores for instruction-tuned LLAMA-{1, 2} and FALCON models, compared to their vanilla counterparts. 
However, for some of OpenAI’s models the instruction-tuning process appears to actually enhance generative abilities with respect to our metrics, and thus also downstream performance. Overall, our metrics characterize the generative abilities of instruction-tuned OpenAI models very differently from comparable LLAMA and FALCON variants. This leads us to believe that OpenAI’s concrete and proprietary instruction-tuning process and dataset must be substantially different from the ones used for LLAMA and FALCON. #### Comparing Model Families In Table 2, we compare LLAMA-7B and LLAMA-2-7B. We find that conformity serves as the primary indicator for differentiating LLAMA vanilla models of different generations, with LLAMA-2-7B showing significantly improved results. With respect to instruction-tuning, we analyze the difference between VICUNA-7B and LLAMA-2-CHAT-7B. We find that in pure generative abilities according to our framework, VICUNA-7B outperforms LLAMA-2-CHAT-7B across almost all metrics, with the exception of faithfulness. Notably, VICUNA-7B only fine-tunes the model on dataset of instruction prompts, whereas the fine-tuning process for LLAMA-2-CHAT-7B is more extensive [Touvron et al., 2023b], using multiple phases of fine-tuning and RLHF, similar to OpenAI’s instruction-tuning process [Ouyang et al., 2022]. This suggests that the more extensive fine-tuning process of LLAMA-2-CHAT-7B does not necessarily lead to better generative abilities, at least not according to our metrics or downstream performance. We show the results for other temperatures in App. E but note that the conclusions and trends discussed here do not change. Table 3: Comparing models for the best performing temperature in the zero-shot setting. T is the optimal sampling temperature. | Model Name | T | Perf. | |---------------------|-----|-------| | INSTRUCTGPT-3-175B | 1.9 | 0.767 | | LLAMA-2-7B | 0.7 | 0.760 | | LLAMA-13B | 0.7 | 0.758 | | VICUNA-7B | 1.9 | 0.757 | | LLAMA-2-CHAT-7B | 1.6 | 0.755 | | : | : | : | | INSTRUCTGPT-3.5-175B_chat | 1.9 | 0.715 | | GPT-3-350M | 1.0 | 0.707 | | INSTRUCTGPT-3-350M | 1.3 | 0.707 | Table 4: Comparing models for the best performing temperature in the few-shot setting. T is the optimal sampling temperature. | Model Name | T | Perf. | |---------------------|-----|-------| | LLAMA-2-CHAT-13B | 1.6 | 0.775 | | INSTRUCTGPT-3-175B | 1.3 | 0.775 | | VICUNA-13B | 1.6 | 0.768 | | LLAMA-2-CHAT-7B | 1.6 | 0.764 | | VICUNA-7B | 1.6 | 0.764 | | : | : | : | | INSTRUCTGPT-3.5-175B_chat-instruct | 1.3 | 0.744 | | INSTRUCTGPT-3-350M | 0.7 | 0.723 | | INSTRUCTGPT-3.5-175B_chat | 1.3 | 0.711 | Model Comparison To compare downstream performance independent from sampling configuration, we consider maximum downstream performance per model, choosing the best sampling temperature individually. We report the summary of the resulting ranking in Table 3, with full results in App E. Interestingly, among the top positions we see both vanilla (LLAMA-2-7B, LLAMA-13B) and instruction-tuned models, suggesting that instruction-tuning does not necessarily enhance inherent generative capabilities. Further, we find that specifically INSTRUCTGPT-3.5-175B_chat model scores very poorly on all metrics, including downstream performance. On closer look we find that its training regime appears to primarily optimize faithfulness at the expense of other distributional characteristics. 
This results in very poor downstream performance, only slightly better than the worst performing models GPT-3-350M and INSTRUCTGPT-3-350M, which are also much older than INSTRUCTGPT-3.5-175B_chat. Surprisingly, we find that the best model INSTRUCTGPT-3-175B is closely followed by much smaller LLAMA-based models, suggesting that a large model size is not necessarily a requirement for good generative abilities in a distributional sense. Few-Shot Performance While in most of our experiments we rely on simple instructive prompts, we also consider few-shot prompting. Specifically, we select 10 samples from each data domain and use three random samples from those 10 samples for each sample query. With this, we can significantly boost the performance of instruction-tuned models specifically, as the additional variation and specificity during prompting helps address their low diversity and conformity scores. We report the updated ranking of downstream performance in Table 4. With this, instruction-tuned models dominate the top positions, with INSTRUCTGPT-3-175B and LLAMA-2-CHAT-13B sharing the first place. Interestingly, we find that vanilla models are now completely absent from the top five, indicating that the few-shot procedure is effective at mitigating the issue of instruction-tuned models regarding diversity and conformity. INSTRUCTGPT-3.5-175B_chat on the other hand, moves to the last position, suggesting that even few-shot prompting cannot address the short-coming of chat-training for synthetic dataset generation. 5 RELATED WORK Synthetic data generation using LLMs has been explored for various applications and use-cases. We briefly discuss each of these research areas. Dataset generation for zero-shot learning Recent works (Ye et al., 2022a,b; Gao et al., 2023; Meng et al., 2022) have proposed alternative strategies for zero-shot learning due to the increasing size of foundation models. These works adopt a different paradigm that leverages large language models (LLMs) to generate synthetic datasets and train smaller, task-specific models on these datasets for downstream tasks. Ye et al. (2022a) pioneers this approach by introducing ZeroGen, a framework that generates synthetic data using LLMs for various downstream tasks. Following this initial work, several studies have focused on improving the performance of models trained on synthetic data by addressing potential issues such as fitting to noisy samples and enhancing generalization to real-world applications (Gao et al., 2023; Meng et al., 2022). Additionally, Ye et al. (2022b) proposes an iterative few-shot prompting method that incorporates influential samples from the synthetic data to increase dataset size, further refining the data generation process. Despite these advances, current studies do not consider larger foundation models and neglect to analyze the trade-offs and relationships between the different characteristics of a dataset. Several works generate datasets for specific tasks. Schick and Schütze (2021) uses an instruction combined with an input sentence and uses a LM to generate a new sample, where the label relationship between the original and generated sentence can be derived from the instruction. Josifoski et al. (2023) generates data for complex tasks, such as structured data, by leveraging asymmetry in tasks. Self-improvement Using synthetic data to self-improve the foundation model has achieved a lot of attention recently. In particular, Bai et al. 
(2022b) introduced a novel method for training models to be more helpful using self-improvement, by allowing the trained model to both generate and evaluate its own outputs and thereby iteratively improve its own performance. Similarly, Wang et al. (2022b) developed an approach wherein the model generates its own instruction dataset, which is then used for fine-tuning itself. Huang et al. (2023) focused on enhancing the model’s capability in reasoning tasks by training it on its own high-confidence outputs. Taking a different approach, Haluptzok et al. (2023) aimed to enhance the code generation capabilities of the model. By employing the model’s own output, they generated and selected code snippets to use as training samples, ultimately improving the model performance in code-related tasks. Finally, to reduce the toxicity of AI-generated text, Wang et al. (2022a) proposed a method to fine-tune language models on non-toxic data.

Dataset Augmentation Dataset augmentation has a rich history, with various strategies employed to improve the performance of models. Techniques such as back-translation (Sennrich et al., 2016), c-BERT word replacement (Wu et al., 2019), or a combination of different methods (Qu et al., 2021) have been explored. Recently, LMs have also been used for data augmentation. For instance, Yang et al. (2020) generates samples using foundation models for commonsense reasoning tasks and incorporates the most diverse and informative samples into their dataset. Moreover, Dorner et al. (2023) utilizes foundation models for unsupervised style transfer. Chia et al. (2022) generates synthetic data for relation triplet extraction, where the goal is to extract two parts of the prompt as well as their relation label. Bonifacio et al. (2022) generates data for an information retrieval task, but does require a few examples for each class.

Generation for specific purpose Several works have focused on generating datasets for specific applications using foundation models. For instance, Chen et al. (2023) developed a dataset for social conversations, while Hartvigsen et al. (2022) introduced a new large-scale dataset for toxicity analysis. Additionally, Yuan et al. (2022) presented a human-in-the-loop dataset generation technique and employed it to create a dataset on biographies.

LLM Evaluation A wide range of holistic multi-task multi-metric frameworks (Liang et al., 2022; Gao et al., 2021a; Hendrycks et al., 2021) as well as domain-specific evaluation suites (Guha et al., 2023) for the evaluation of LLMs have been proposed. These frameworks often build on existing tasks, such as question answering (Clark et al., 2018; Bhakthavatsalam et al., 2021; Lin et al., 2022), language understanding (Wang et al., 2019) or sentence completion (Zellers et al., 2019). While assessing models on a broad set of downstream tasks, to the best of our knowledge, none of these works measure the models’ capacity for dataset generation.

6 CONCLUSION

Through a comprehensive evaluation of synthetic datasets generated by LLMs, our study revealed inherent tradeoffs between dataset diversity, complexity, conformity, faithfulness and performance. We show that these trade-offs generalize across data domains and models, allowing us to study differences between instruction-tuned and vanilla models. These results highlight differences in model characteristics, e.g., how different models in the LLaMA family differ.
We further find that ChatGPT (INSTRUCTGPT-3.5-175B_chat) generates very faithful datasets, but lags behind all other models in terms of complexity, diversity and conformity, resulting in worse downstream performance compared to other models. Our study marks a crucial step towards a more nuanced understanding of dataset generation by LLMs, shedding light on the behavior of various models.

7 REPRODUCIBILITY

We include our code, prompts, and detailed instructions on how to reproduce our results as part of the supplementary material of this paper.
3PWYAlAQxv
Could the authors elaborate on what they observe in figure 4? Specifically, how it relates to rank structures in permutation groups (e.g. larger/smaller cycles take longer/shorter to train), and how it relates to weight consolidation/pruning/weight projection. The connections aren't obvious to the reader.
Neural Networks Trained by Weight Permutation are Universal Approximators Anonymous authors Paper under double-blind review Abstract The universal approximation property is fundamental to the success of neural networks, and has traditionally been achieved by networks without any constraints on their parameters. However, recent experimental research proposed an innovative permutation-based training method, which can achieve desired classification performance without modifying the exact values of the weights. In this paper, we prove that the permutation training method can guide a ReLU network to approximate one-dimensional continuous functions. Our numerical results under more diverse scenarios also validate the effectiveness of the permutation training method in regression tasks. Moreover, the notable observations during weight permutation suggest that permutation training can provide a novel tool for describing network learning behavior. 1 Introduction The universal approximation property (UAP) of neural networks is a cornerstone in the theoretical guarantee of deep learning, proving that even the simplest two-layer feedforward networks can accurately approximate any continuous function (Cybenko [1989], Hornik et al. [1989], Leshno et al. [1993]). This fascinating ability allows neural networks to replace critical, challenging components in existing frameworks to enhance efficiency (Mnih et al. [2013], Goodfellow et al. [2014], Kaisst et al. [2019]). Despite the extensive study in various settings, existing research on the UAP rarely imposes restrictions on the network parameters. However, in certain application scenarios, constraints posed by some specific requirements are essential (Nugent [2005], Kosuge et al. [2021a]). As a constrained scenario, Qiu & Suda [2020] empirically showed that, without altering the exact value of network weights, only permuting the initialized weight vector can achieve comparable or better performance for image classification tasks. This unique property makes the permutation-based training method attractive for specific hardware applications, such as fixed-weight accelerators (Kosuge et al. [2021a]). It can also facilitate the implementation of physical neural networks (Nugent [2005], Feldmann et al. [2021]) (see App. A for details). Motivated by this impressive result, we are intrigued to investigate whether this permutation training method still possesses UAP. In this paper, we theoretically establish the UAP of a permutation-trained network with rectified linear unit (ReLU) activation functions for any one-dimensional continuous function. To address the non-trivial challenge posed by the permutation training setting, the key idea of our proof is a four-pair construction of the step function approximators, which helps us to approach the targeted continuous function with a piecewise constant function (Stein & Shakarchi [2009]). Additionally, a processing method is proposed to eliminate the impact of the remaining weights. Moreover, our numerical experiments not only validate the theoretical results but also demonstrate the widespread existence of the UAP of permutation training in diverse initializations. The patterns observed during permutation training also highlight its potential in describing learning behavior, relating to topics like the pruning technique (Frankle & Carbin [2019]) and continual learning (Maltoni & Lomonaco [2019], Zeng et al. [2019]). 
We summarize the main findings of this paper below: • We prove the UAP of permutation-trained networks with equidistant initialization and pairwise random initialization to one-dimensional continuous functions. • We conduct numerical experiments of regression problems under various generalized settings, identifying the common occurrence of the UAP of permutation training. • By observing the permutation patterns, we find that permutation training could potentially serve as a new approach to describe the detailed network learning behaviors. Related works. The UAP has been extensively studied in various settings, leading to many efficient applications. It is well known that fully connected networks are universal approximators for continuous functions (Cybenko [1989], Hornik et al. [1989], Leshno et al. [1993]). Additionally, the UAP of continuous functional and operator are presented by Chen & Chen [1995], giving rise to the operator learning formalisms such as DeepONet (Lu et al. [2021]). However, the traditional UAP is suited to wide networks where the weights are freely adjusted. Our configuration is focused on a specific approach that only allows permuting weights. Permutation is crucial in deep learning and closely relates to permutation equivariant or invariant networks (Cohen & Welling [2016]), designed to learn from symmetrical data (Zaheer et al. [2017], Lee et al. [2019]). It is also evident in graph-structured data which inherently exhibit permutation invariance (Maron et al. [2019], Satorras et al. [2021]). However, these works mainly concern issues with intrinsic symmetry, while permutation training is not limited to these scenarios. As for the weight permutation attempts, Qiu & Suda [2020] empirically proposed the first (to our knowledge) weight-permuted training method, which exhibits comparable classification performance and has been practically applied as a fixed-weight accelerator (Kosuge et al. [2021a,b]). A further discussion about this method’s advantages in hardware implementation is given in App. A. Our work provides theoretical guarantees of this method and considers some regression tasks numerically. Additionally, Scabini et al. [2022] improved the initialization by rewiring neurons from the perspective of computer networks, but the training methods are unchanged. Permutation training is also closely related to the permutation symmetry and linear mode connectivity (LMC) (Frankle et al. [2020], Entezari et al. [2021]). The LMC suggests that after a proper permutation, most SGD solutions under different initialization will fall in the same basin in the loss landscape. Similarly, our permutation training also seeks a permutation to effectively improve performance. Therefore, the search algorithm utilized in LMC (Jordan et al. [2023], Ainsworth et al. [2023]) can serve as a reference for the permutation training algorithm, and vice versa. Moreover, it would be interesting to explore the LMC between different permutation training solutions. Outline. We state the main result in Sect. 2, which includes ideas to derive the main result. In Sect. 3, we provide a detailed construction of the proof. The numerical results of permutation training are presented in Sect. 4, along with the observation of permutation behavior during the training process. Finally, the conclusion is provided in Sect. 5. All formal proof of the theorems is in the Appendix. 
2 NOTATIONS AND MAIN RESULTS 2.1 Neurual networks architecture We start with a two-layer feed-forward ReLU network with $N$ hidden neurons in even numbers (i.e., $N = 2n$). It has the form of a linear combination of ReLU basis functions (noted as $\text{ReLU}(z) = \max\{z, 0\}$) as $f(x) = \sum_{i=1}^{N} a_i \text{ReLU}(w_i \cdot x + b_i) + c$. Particularly, we focus on approximating one-dimensional functions, so all weights are scalars $(w_i, b_i, a_i, c \in \mathbb{R})$. Since ReLU activation is positively homogeneous (i.e., $\text{ReLU}(\gamma x) = \gamma \text{ReLU}(x)$ for all $\gamma > 0$), we consider a simplified homogeneous case with $w_i = \pm 1$, and utilize $n$ to divide the basis functions into two parts as $$\phi_i^\pm(x) = \text{ReLU}(\pm (x - b_i)), \quad i = 1, 2, ..., n,$$ where the biases $\{b_i\}_{i=1}^n$ determine the location of basis functions. Then we introduce a one-dimensional linear layer. It will be shown later that while this layer is not essential for achieving UAP, it does simplify the proof and offer practical value. The network’s output function $f^{\text{NN}}$ gives $$f^{\text{NN}}(x) = \alpha + \gamma \sum_{i=1}^{n} [p_i \phi_i^+(x) + q_i \phi_i^-(x)],$$ where $\{p_i, q_i\}_{i=1}^n$ are the coefficients of the basis functions and $\alpha, \gamma$ are scaling factors. This form corresponds to a three-layer network, where $\{p_i, q_i\}_{i=1}^n$ and $\{\alpha, \gamma\}$ are the parameters in the second hidden layer and output layer, respectively. 2.2 Weight configuration and main theorems Without loss of generality, we consider the target continuous function \( f^* \in C([0,1]) \). During the permutation training process, we hold the initial value of the second hidden layer’s weight vector \( \theta^{(n)} = (p_i, q_i)_{i=1}^n \) and only update the order relationship of its components, leading to the following configuration: the weight vector \( \theta^{(n)} \) is permuted from a predetermined vector \( W^{(n)} \in \mathbb{R}^{2n} \). We first focus on a simple scenario with equidistantly distributed location \( B^{(n)} \) and pairwise coefficients \( W^{(n)} \). The UAP of a permutation-trained network to continuous functions can be stated as follows: **Theorem 1** (UAP with a linear layer). For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \), and \( \alpha, \gamma \in \mathbb{R} \) for \( f^{\text{NN}} \) in Eq. (2) with equidistantly distributed \( B^{(n)} = \left(0, \frac{1}{n-1}, \ldots, 1\right) =: (b_i)_{i=1}^n \) and corresponding \( W^{(n)} = (\pm b_i)_{i=1}^n \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}) \), such that \( |f^{\text{NN}}(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). The intuition of this result comes from the rich expressive possibility of permutation training since there are \((2n)!\) different permutations for \( W^{(n)} \). Next, we enhance the result in Theorem 1 to a purely permuted situation, suggesting the UAP can be achieved without changing \( \alpha, \gamma \) as **Theorem 2** (UAP without the linear layer). Let \( \alpha = 0, \gamma = 1 \). For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \) for \( f^{\text{NN}} \) in Eq. 
(2) with equidistantly distributed \( B^{(n)} = (b_i)_{i=1}^n \) and \( W^{(n)} = (\pm b_i)_{i=1}^n \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}) \) such that \( |f^{\text{NN}}(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). Although Theorem 1 can be viewed as a corollary of Theorem 2, the proof process will reveal the practical usefulness of learnable \( \alpha, \gamma \) in reducing the required network width to achieve UAP. Moreover, the result can be generalized to the scenario with random initialization, which is stated as **Theorem 3** (UAP for randomly initialized parameters). Given a probability threshold \( \delta \in (0,1) \), for any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \), and \( \alpha, \gamma \in \mathbb{R} \) for \( f^{\text{NN}}_r \) in Eq. (2) with randomly initialized \( B^{(n)}_r \sim \mathcal{U}[0,1]^n \) and pairwisely randomly initialized \( W^{(n)}_r = (\pm p_i)_{i=1}^n, p_i \sim \mathcal{U}[0,1] \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}_r) \), such that with probability \( 1 - \delta \), \( |f^{\text{NN}}_r(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). 2.3 Proof ideas To identify the UAP of our network (2) in \( C([0,1]) \), we employ a piecewise constant function, which is a widely-used continuous function approximator (Stein & Shakarchi, 2009), and can be expressed as a summation of several step functions. Next, we demonstrate that our networks can approximate each step function. In this spirit, our constructive proof includes three steps: 1. Approach the target function \( f^* \) by a piecewise constant function \( g \); 2. Approximate each step function of \( g \) by a subnetwork of \( f^{\text{NN}} \) with permuted coefficients; 3. Annihilate the unused basis functions and coefficients of \( f^{\text{NN}} \). Thanks to the Stone-Weierstrass theorem in function approximation theory (Stone, 1948), step 1 can be achieved by dividing the range of \( f^* \) with a uniform height to construct each step functions \( f_s \) (see Fig. 1(a)). The statement is outlined below (refer to App. B for detailed definition and proof), **Lemma 1.** For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there is a piecewise constant function \( g(x) \) with a uniform height \( \Delta h \leq \varepsilon \), such that \( |g(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). The execution of step 2 is inspired by the divide-and-conquer algorithm in computer science (Hopcroft et al., 1983) and the multi-grid method in numerical analysis (Hackbusch, 2013). Suppose that the piecewise constant function \( g \) in Lemma 1 is a summation of \( J \) step functions \( \{f_{s_j}\}_{j=1}^J \), we partition the basis functions \( B^{(n)} \) also into \( J \) subgroups as \( B^{(n)} = \bigcup_{j=1}^J B_j \). Each subgroup \( B_j \) contains \( b_i \) distributed over the entire domain, instead of localized \( b_i \) (see Fig. 1(b)). This allows each subgroup to approach \( f_s \) at arbitrary locations using the same construction. --- 1 Fig. 7 in App. M intuitively shows various kinds of \( f^{\text{NN}}(x) \) under different permutations. Figure 1: Main idea of the construction. (a) Approximate the continuous function $f^*$ by a piecewise constant function $g$ which is further approximated by permuted networks $\tilde{f}^{\text{NN}}$. (b) Partition of basis functions. 
(c) The step function approximator $\tilde{f}_s^{\text{NN}}$ constructed by four-pair of basis functions located at $b_1, b_2, b_3, b_4$. (d) Summing pseudo-copies to adjust the heights of resulting function $\tilde{f}_s^{\text{NN}}$. Then, for every subgroup $B_j$, we construct a step function approximator $\tilde{f}_{s_j}^{\text{NN}}$ to approximate $f_{s_j}$, then sum them up to approach $g$. A core technique of this construction is utilizing four pairs of basis functions $\{\pm b_i\}_{i=1}^{4}$ (shown in Fig. 1(c)), along with a one-to-one correspondence between coefficients and biases (i.e., $\{p_i, q_i\}_{i=1}^{4} = \{\pm b_i\}_{i=1}^{4}$) to meet the permutation training setting, where each coefficient is used only once. This construction can also prevent conflict between different $B_j$. It is important to note that step 3 is necessary to achieve the desired construction. A crucial challenge of permutation training is that we must assign every parameter, rather than just pick up the wanted parameters and discard the rest. Therefore, it is essential to eliminate the remaining network parameters after step 2 to prevent the potential accumulation of errors. We solve this problem by a processing method that reorganizes them into a linear function with controllable slope and intercept. To further enhance the conclusion of Theorem 1 to Theorem 2, we introduce a technique called pseudo-copy, which can achieve UAP without the linear layer. By refining the parameters distribution, several pseudo-copies $\tilde{f}_s^{\text{NN}}$ of the original approximator $\tilde{f}_s^{\text{NN}}$ can be produced with a controllable error (see Fig. 1(d)). The final height can then be adjusted by stacking these copies together, making the scale parameters $\alpha, \gamma$ in Theorem 1 removable. Extending the UAP to the random initializations is justified by that as the width increases, the parameters randomly sampled from uniform distributions become denser, thus approaching the equidistant case. Therefore, a sufficiently wide network has a high probability of finding a subnetwork that is close enough to the network with UAP in the equidistant case. Then this subnetwork can also achieve UAP due to its continuity. The remaining part of the network can be eliminated by step 3. 3 UAP OF PERMUTATION-TRAINED NETWORKS This section provides a detailed construction of the approximator with weight-permuted networks in the equidistant case, along with an estimation of the convergent rate of approximation error. The extension to the scenario with random initialization is also thoroughly discussed. 3.1 THE FOUR-PAIR CONSTRUCTION OF STEP FUNCTION APPROXIMATORS We start with the equidistant case, and consider four pairs of basis functions $\{\phi_i^\pm\}_{i=1}^{4}$ in Eq. (1) and coefficients $\{p_i, q_i\}_{i=1}^{4} = \{\pm b_i\}_{i=1}^{4}$, where $b_1 \leq b_2 \leq b_3 \leq b_4$ along with a symmetric distance $d = b_2 - b_1 = b_4 - b_3$. The step function approximator $\tilde{f}_s^{\text{NN}}$ has a piecewise linear form as $$\tilde{f}_s^{\text{NN}}(x) = \sum_{i=1}^{4} p_i \phi_i^+(x) + \sum_{i=1}^{4} q_i \phi_i^-(x).$$ \hspace{1cm} (3) To ensure a local error of the approximator, we appeal \( f_{s}^{\text{NN}} \) to be \( x \)-independent outside the interval \([b_1, b_4]\). 
As a result, the coefficients \( p_i, q_i \) must satisfy \( \sum_{i=1}^{4} p_i = \sum_{i=1}^{4} q_i = 0 \), which implies the following correspondence between \( \{p_i, q_i\}_{i=1}^{4} \) and \( \{\pm b_i\}_{i=1}^{4} \):
\[
\begin{aligned}
p_1 &= -b_1, & p_2 &= +b_2, & p_3 &= +b_3, & p_4 &= -b_4, \\
q_1 &= +b_4, & q_2 &= -b_3, & q_3 &= -b_2, & q_4 &= +b_1,
\end{aligned}
\quad (4)
\]
and the detailed expression of \( \tilde{f}_{s}^{\text{NN}} \) is given in Eq. (9) in App. C. Notice that \( \tilde{f}_{s}^{\text{NN}} \) is monotone and centrally symmetric about the point \( x = \frac{b_2 + b_3}{2} \), so the absolute values of the two constant pieces on \( x < b_1 \) and \( b_4 \leq x \) are the same. The height \( h \) of \( \tilde{f}_{s}^{\text{NN}} \) is then
\[
h = 2(b_1^2 - b_2^2 - b_3^2 + b_4^2) = 4(b_2 b_3 - b_1 b_4) = 4d(b_4 - b_2). \quad (5)
\]
Along with a shift of \( h/2 \), it can approximate the step function \( f_s(x) = h \chi(x - s) \) with \( s \in [b_1, b_4] \), where \( \chi(z) = 1 \) when \( z > 0 \) and \( \chi(z) = 0 \) otherwise (see Fig. 1(c)). An example is plotted in Fig. 8 in App. M. The error \( \| (\tilde{f}_{s}^{\text{NN}} + h/2) - f_s \|_{L^\infty} \) clearly has the trivial bound \( h \).

### 3.2 Annihilate the Unused Part of the Network

After constructing the step function approximators, the remaining parameters must be suitably arranged to eliminate their impact. Notice that the pair of basis functions \( \phi_i^\pm \) at each location \( b_i \) is either used together or not at all. Therefore, for each unused pair \( \phi_i^\pm \) and the corresponding coefficients \( \pm p_i \), we can form a linear function \( a_i \ell_i \), where \( \ell_i(x) := p_i \phi_i^+(x) - p_i \phi_i^-(x) = p_i x - p_i b_i \), with a freely adjustable sign \( a_i = \pm 1 \). The goal then is to choose proper signs \( a = \{a_i\}_{i=1}^{n} \) for the \( \ell_i \) so as to control \( \| S_\ell \|_{L^\infty} \) on \([0, 1]\), where \( S_\ell(x) := \sum_{i=1}^{n} a_i \ell_i(x) \) is the summed function. This can be achieved by bounding the slope \( \sum_{i=1}^{n} a_i p_i \) with respect to \( a \), which reduces to choosing addition and subtraction signs within a given sequence so as to make the final result small. The following lemma provides a solution, with an upper bound determined by the largest gap in the sequence.

**Lemma 2.** For an even number \( n \) and a sequence of real numbers \( \{c_i\}_{i=1}^{n} \) with \( c_i \in [0, 1], i = 1, 2, \cdots, n \), there exists a choice of signs \( \{a_i\}_{i=1}^{n} \) with \( a_i = \pm 1 \), such that \( 0 \leq \sum_{i=1}^{n} a_i c_i \leq \Delta c \), where \( \Delta c = \max_i |c_{i+1} - c_i| \) is the largest gap between consecutive elements of \( \{c_i\}_{i=1}^{n} \).

We prove Lemma 2 by proposing a certain processing method (refer to App. D). As the network width increases, the distribution of the \( p_i \) becomes denser, so the largest gap \( \Delta p \to 0 \) and the error introduced by the unused part can be made arbitrarily small. Notice that the only assumption of this method is the pairwise initialization of the coefficients, \( (\pm p_i)_{i=1}^{n} \), enabling the extension to random initializations. It also permits a generalization to deeper networks by constructing an identity function and eliminating the remaining parts. Further details can be found in App. D.
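As a concrete check of this construction, the following NumPy sketch (our own illustration, not code from the paper) evaluates the four-pair approximator of Eqs. (3)-(4) on a grid and verifies numerically that it is constant outside $[b_1, b_4]$ with jump height $h = 4d(b_4 - b_2)$ as in Eq. (5).

```python
import numpy as np

def four_pair_approximator(x, b1, b2, b3, b4):
    # Step-function approximator of Eq. (3) with the coefficient assignment of Eq. (4);
    # requires b1 <= b2 <= b3 <= b4 and b2 - b1 == b4 - b3 (= d).
    relu = lambda z: np.maximum(z, 0.0)
    biases = np.array([b1, b2, b3, b4])
    p = np.array([-b1,  b2,  b3, -b4])     # sum(p) = 0
    q = np.array([ b4, -b3, -b2,  b1])     # sum(q) = 0
    phi_plus  = relu(x[:, None] - biases)  # phi_i^+(x)
    phi_minus = relu(biases - x[:, None])  # phi_i^-(x)
    return phi_plus @ p + phi_minus @ q

b1, b2, b3, b4 = 0.2, 0.3, 0.5, 0.6        # symmetric distance d = 0.1
x = np.linspace(0.0, 1.0, 2001)
y = four_pair_approximator(x, b1, b2, b3, b4)
h = 4 * (b2 - b1) * (b4 - b2)              # Eq. (5): here h = 0.12
print(np.allclose(y[x < b1], y[0]), np.allclose(y[x > b4], y[-1]))   # flat outside [b1, b4]
print(y[-1] - y[0], h)                     # jump between the two constant pieces equals h
```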
### 3.3 Approximate Piecewise Constant Functions Now we briefly discuss how to permute equidistant coefficients \( W^{(n)} \) in \( f^{\text{NN}}(x) = \sum_{j=1}^{J} f_{s_j}^{\text{NN}}(x) \) to approximate piecewise constant function \( g(x) = \sum_{j=1}^{J} a_j \Delta h \chi(x - s_j) \) in Lemma 1 with accuracy \( \varepsilon \), where \( a_j = \pm 1 \) and \( \Delta h < \varepsilon / 2 \). The detailed proof is provided in App. E. We choose \( n \) sufficiently large to ensure that every approximator \( a_j [f_{s_j}^{\text{NN}}(x) + \frac{h}{2}] \) can approximate \( f_{s_j}(x) = a_j \Delta h \chi(x - s_j) \) with error \( h \). Since the height \( h \) in Eq. (5) may not equal \( \Delta h \), a multiplying factor \( \gamma = \Delta h / h \) is needed. Similarly, the accumulated \( h/2 \) shifting in each \( f_{s_j}^{\text{NN}} \) requires another scaling parameter \( \alpha \). Then the whole approximation, along with Lemma 1, allow us to prove the Theorem 1 since \[ |f^{\text{NN}}(x) - g(x)| = \left| \alpha + \gamma \sum_{i=1}^{n} \left[ p_i \phi_i^+(x) + q_i \phi_i^-(x) \right] - g(x) \right| \leq \Delta h < \varepsilon / 2, \quad \forall x \in [0, 1]. \] Next, we achieve UAP without the scaling parameters \( \alpha, \gamma \). The shifting scale \( \alpha \) can become small enough by constructing a constant function with a similar height (see App. F). To handle the mismatch between \( h \) and \( \Delta h \), we introduce the pseudo-copy technique, which stacks \( M \) copies of \( f_{s_j}^{\text{NN}} \) to reach the height \( \Delta h = Mh \) (see Fig. 1(d)). However, the copies’ locations cannot be identical since the biases \( B^{(n)} \) are uniquely assigned. Therefore, we refine the biases \( M \)-times and partition it into \( M \) subgroups as \( B^{(Mn)} = \bigcup_{l=1}^{M} B_l \) like Fig. 1(b). The pseudo-copy \( f_{s_j}^{\text{NN}} \) is then organized on each $B_t$, respectively. Since the pseudo-copies are very close to the original one, the refined approximation error $\|f_{s_i}^{\text{NN}} - f_s\|_{L^\infty}$ can also be controlled (refer to App. G). Theorem 2 can be proved as below, which indicates that constructing pseudo-copies requires a much larger network. $$|f_{s_i}^{\text{NN}}(x) - g(x)| = \left| \sum_{i'=1}^{M_n} [p_{i'} \phi_{i'}^+(x) + q_{i'} \phi_{i'}^-(x)] - g(x) \right| \leq \Delta h < \varepsilon/2, \quad \forall x \in [0, 1]. \tag{7}$$ ### 3.4 Estimate the Approximation Rate Here we estimate the approximation rate roughly by the $L^2$ error $E_s$ of approximating single step function $f_s(x) = h \chi(x - s)$ by $f_{s_i}^{\text{NN}}(x)$. Start with our four-pair construction in Eq. (3), assume $s = (b_2 + b_3)/2$ and rewrite the relations $b_1 = s - k_2, b_2 = s - k_1, b_3 = s + k_1, b_4 = s + k_2$, where $0 < k_1 \leq k_2$, then the error of single approximator gives (see App. H for details and a similar estimation for pseudo-copies) $$e_s^2 = \left\| \left( f_{s_i}^{\text{NN}} + \frac{h}{2} \right) - f_s \right\|_{L^2}^2 = \frac{8}{3}(k_1 - k_2)^2(k_1^3 + 3k_1^2k_2 + 2k_1k_2^2 + k_2^3) \leq \frac{56}{3}d^2k_2^3. \tag{8}$$ In our step function approximator in Eq. (4), the $k_2$ can be chosen as $k_2 \sim O(d)$, which implies $e_s \sim O(d^{5/2})$. However, the height $h$ in Eq. (5) also gives $h \sim O(d^2)$. To approximate the step function $f_s$ with height $\Delta h \sim O(1)$, the number of stacked pseudo-copy must satisfy $M = \frac{\Delta h}{h} \sim O(d^{-2})$. Hence the final error is estimated as $E_s = M e_s \sim O(d^{1/2})$. 
Recall that $d = \frac{1}{2n-T}$, we have $E_s \sim O(n^{-1/2})$, which means the approximation rate is roughly 1/2 order with respect to the network width. We will verify this rate by the experimental results in Sect. 4. ### 3.5 Generalize to the Random Initializations In extending the UAP to the common scenario involving random initializations, the basic proof ideas remain unchanged. However, constructing of step function approximators in Eq. (3) becomes invalid because the desired basis function cannot be located accurately. Nevertheless, the randomly sampled basis functions will become more dense upon increasing width, leading to a high probability of finding basis functions that closely match the required location. Therefore, we can first apply the UAP in the equidistant case to obtain a network $f_{s_i}^{\text{NN}}$ in Eq. (2), which exhibits approximation power. Then, within a randomly initialized network of sufficient width, we find a subnetwork $f_{s_i}^{\text{sub}}$ that can be regarded as randomly perturbed from $f_{s_i}^{\text{NN}}$. If this perturbation is small enough, the subnetwork $f_{s_i}^{\text{sub}}$ will also possess approximation power. Notice that this declaration can hold for totally random coefficients $W_r^{(n)} \sim U[0, 1]^{2n}$. However, eliminating the unused parameters by the process discussed in Sect. 3.2 requires a pairwise form such as $W_r^{(n)} = (\pm p_i)_i^{n-1}$. Therefore, we restrict our result to the case in Theorem 3. The detailed proof along with an estimation of the probability introduced by randomness are given in App. I. ### 4 Experiments This section presents numerical evidence to support and validate the theoretical proof. An interesting observation of permutation behaviors also highlights the theoretical potential of this method. #### 4.1 The Algorithm Implementation of Permutation Training In the implementation of permutation training, guidance is crucial in finding the ideal order relationship of the weights. The lookahead permutation (LaPerm) algorithm proposed in Qiu & Suda (2020) introduces an $k$-times Adam-based free updating (Kingma & Ba, 2015), where the learned relationship can then serve as a reference for permuting. To ensure the performance, the weights are permuted after every $k$ epoch. Apart from the fixed permutation period $k$ chosen by Qiu & Suda (2020), we also consider a relaxed algorithm with a gradually increased $k$ to learn sufficient information for the next permutation. The impact of $k$'s value on convergence behavior is evaluated to be negligible (see App. N). See App. J for a discussion of the original and relaxed LaPerm algorithms. 4.2 Experimental Setting of Function Approximation Tasks Now we carry out experiments for some regression problems to justify our theoretical results. We consider a three-layer network in Eq. (2), where the first hidden layer’s parameters are fixed to form the ReLU basis functions \( \{\phi_i^\pm\}_{i=1}^n \) in Eq. (1), and the weights \( \theta^{(n)} \) of the second hidden layer are trained by permutation. Moreover, \( \alpha, \gamma \) in the output layer are freely trained scaling factors to reduce the required network width. All the experiments below are repeated 10 times with different random seeds, and the error bars mark the range of the maximum and minimum values. Refer to App. I for the detailed experimental environment and setting for each case. 
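As an illustration of this training loop on the network of Eq. (2), the sketch below is our own simplified rendering (not the authors' code): it alternates $k$ plain gradient steps (standing in for the Adam inner updates of LaPerm) with a synchronization step that replaces the coefficients by the permutation of the fixed initial vector $W^{(n)}$ matching the rank order learned by the free weights; this rank-matching rule is our reading of the LaPerm synchronization step.

```python
import numpy as np

def permute_to_initial(w_free, w_init):
    # Order-preserving projection: keep the multiset of initial values w_init,
    # rearranged into the rank order that the freely trained weights w_free have learned.
    out = np.empty_like(w_init)
    out[np.argsort(w_free)] = np.sort(w_init)
    return out

# toy setup following Eq. (2): fixed ReLU features phi_i^{+/-}, trainable coefficients theta
n = 64
b = np.linspace(0.0, 1.0, n)
x = np.linspace(0.0, 1.0, 256)
features = np.concatenate([np.maximum(x[:, None] - b, 0.0),
                           np.maximum(b - x[:, None], 0.0)], axis=1)   # shape (256, 2n)
target = np.sin(2 * np.pi * x)

theta_init = np.concatenate([b, -b])       # pairwise coefficients W^(n) = (+/- b_i)
theta = theta_init.copy()
w = theta.copy()                           # freely updated "inner" weights
lr, k = 0.05, 20                           # inner learning rate and permutation period
for step in range(1, 2001):
    residual = features @ w - target
    w -= lr * features.T @ residual / len(x)           # plain gradient step on the regression loss
    if step % k == 0:
        theta = permute_to_initial(w, theta_init)      # permutation training: only the order changes
        w = theta.copy()

print(np.abs(features @ theta - target).max())         # sup-norm error of the permuted network
assert np.allclose(np.sort(theta), np.sort(theta_init))
```

By construction, the trained vector always remains an exact permutation of its initialization, which is what the final assertion checks.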
4.3 Approximating the One-Dimensional Continuous Functions For one-dimensional cases, we utilize a 1-2n-1-1 network architecture with random initializations discussed in Theorem 3. The approximation targets are typical continuous functions \( y = -\sin(2\pi x) \) and 3-order Legendre polynomial \( y = \frac{1}{2}(5x^3 - 3x) \), where \( x \in [-1, 1] \). A more complicated case about a Fourier series with random coefficients, along with the results of the equidistant scenario, are presented in App. Q. The numerical result illustrated in Fig. 2 exhibits a clear convergence behavior upon increasing \( n \). Our relaxed LaPerm algorithm doesn’t show a significant advantage, potentially due to the preliminary attempt of exponentially increasing \( k \). This suggests a need for advanced relaxation schemes, such as a self-adjusted strategy (Qiao et al., 2011). Furthermore, the \( L^\infty \) error exhibits a 1/2 convergence rate with respect to \( n \). Although the theoretical estimation in Sect. 3 is based on \( L^2 \) norm, we indeed observe that it also holds for \( L^\infty \) error. ![Figure 2](image) Figure 2: Approximating one-dimensional continuous function (a): \( y = -\sin(2\pi x) \) and (b): \( y = \frac{1}{2}(5x^3 - 3x) \) with randomly initialized network, where \( x \in [-1, 1] \). The inset in each panel presents the target function as lines and an example of the approximation result as dots. 4.4 The Performance of Various Random Initializations Here we discuss the impact of initialization on performance, which is more crucial for permutation training due to the weights’ lack of positional flexibility. Here we utilize the case in Fig. 2(a) to consider 8 different random initialization methods. Fig. 3 shows that the UAP in permutation-trained networks is not limited in the setting considered by our theorems. The converged random cases followed the pairwise initialization outperform the equidistant scenario, demonstrating the well-known advantages of random initializations. However, some commonly used random initializations, such as Xavier’s uniform initialization \( U_X \) (Glorot & Bengio, 2010), and He’s uniform and normal initializations \( U_H \) and \( N_H \) (He et al., 2015), fail to show convergence behavior. These results emphasize the incompatibility between the existing initializations and the permutation training setting. Further insight can be found by comparing the results in pairs. We first focus on totally and pair-wisely randomly initializing \( W^{(n)} \) from uniform distribution \( U[-1, 1] \), which are labeled as case 1 and 2, respectively. Apart from the clear dominance of pairwise case 1, the total case 2 also shows a certain degree of approximation power. Next, for a randomly initialized \( B^{(n)} \), in case 3 we let \( W^{(n)} \) have a strict correspondence like the equidistant case, while in case 4 \( W^{(n)} \) is initialized separately. The almost equivalent results indicate that the correspondence between \( B^{(n)} \) and \( W^{(n)} \) in Eq. (3) may not be necessary in random cases. Moreover, we apply the standard \( U_H \) for \( W^{(n)} \) in case 5. Figure 3: The performance of randomly initialized parameters to approximate $y = -\sin(2\pi x)$, where $x \in [-1, 1]$. The pairwise random distribution of $W^{(n)} = (\pm p_i)_{i=1}^n; p_i \sim U[-1, 1]$ is noted as $W^{(n)} \sim U_{[0, 1]^n}$, and the same applies to $U_X^{[\pm]}[0, 1]^n$ and $N_H^{[\pm]}[0, 1]^n$. The error bars are omitted for conciseness. 
The inset panel presents the target function as lines and an example of the approximation result as dots. and also for $B^{(n)}$ in case 6. It shows that case 5 achieves the best accuracy for larger networks ($n > 320$), while case 6 exhibits unexpected deterioration, which may be attributed to the mismatch of the scale in $B^{(n)}$. Finally, the default choices $N_H$ and $U_X$ in cases 7 and 8 both yield surprisingly poor performance, underscoring the need for new initializations suitable to permutation training. 4.5 Observation of the Permutation-Active Patterns This section aims to explore the theoretical potential of permutation training in describing network learning behavior. Based on the significant correlation between permutation and learning behavior as evidenced by Qiu & Suda (2020) and our investigation, we hypothesize that the permutation-active components of the weight vector play a crucial role in the training process. Therefore, by identifying and tracing the permutation-active part of weights, a novel tool that provides insights into learning behavior can be achieved, which also facilitates visualization and statistical analysis. As a preliminary attempt, we illustrate the permutation behavior of the coefficients $\theta^{(n)}$ in Fig. 4. The components that participated in the permutation are visually highlighted in dark green. The behavior clearly demonstrated that the order relationship evolves synchronously with the learning process, agreeing with the observation in Qiu & Suda (2020). Figure 4: The permutation behavior in the first 400 permutation iteration in approximating $y = -\sin(2\pi x)$ by equidistantly initialized network with $n = 640$. (a) The distribution of the active components (denoted by dark green color). (b) The frequency distribution illustrates the variation in the total count of active components in each permutation. (c) The corresponding loss behavior. Specifically, the distribution of the active components shows significant patterns, which can be classified into four stages (marked in red dash lines in Fig. 4). The loss declines sharply in the initial stage, while only the components with medium value are permuted. Once loss reaches a plateau in the second stage, more components are involved in permutation, evidencing the role of permutation in propelling the training. As loss starts to decline again, the permutation frequency correspondingly diminishes. Interestingly, the slower loss decrease gives rise to a ribbon-like pattern, akin to the localized permutations reported by Qiu & Suda (2020). This is possibly due to slow updates failing to trigger a permutation. This observation may support the existence of inherent low-dimensional structures within the permutation training dynamics, potentially linked to mathematical depiction of permutation groups, such as cycle decomposition (Cameron, 1999) and Fourier bases for permutation (Huang et al., 2009). Finally, the permutation’s saturation stage aligns with the stationary state of loss convergence. We believe these inspiring phenomena deserve further exploration. 5 CONCLUSION AND DISCUSSION As a constrained training method, permutation training exhibits unique properties and practical potential (see App. A). To verify its efficacy, we prove the UAP of permutation-trained networks with equidistant initialization and pairwise random initialization for one-dimensional continuous functions. The key idea is a four-pair construction of step function approximators in Fig. 
1, along with a processing method to eliminate the impact of the remaining parameters. Our experimental results not only confirm the theoretical declarations (see Fig. 2), but also validate the approximation power for various random initializations in Fig. 3 establishing the prevalence of the UAP of permutation training. The discovery that certain commonly used initializations fail to achieve UAP also raises an intriguing question about the systematical characterization of initializations that satisfy UAP. The generalizability of our results holds significant importance. Extending to networks equipped with leaky-ReLU can be straightforward (refer to App. G for numerical evidence). Our approach also facilitates implementations within other architectures (see App. F for detailed discussion). However, extending our results to the high-dimensional scenario still faces some theoretical challenges, although some preliminary experimental attempts have been made for two-dimensional inputs (see App. K). One potential approach is similar to the discussion in Sect. 3.5 but here we can directly seek the subnetwork as a random perturbation from the network with conventional UAP in high dimensions. To achieve this, however, the processing method in Sect. 3.2 must be generalized from pairwise to total random initializations. We plan to address this problem in future work. Our observation in Sec. 4.5 suggests that permutation training is a novel tool to shed light on network learning behavior. It corresponds well with the training process and has systematical mathematical descriptions (Cameron, 1999; Huang et al., 2009). Specifically, the patterns observed in Fig. 4 can intuitively justify some weight categorization strategies, leading to potential benefits for consolidating the crucial weights for previous tasks (Maltoni & Lomonaco, 2019), or pruning to find the ideal subnetwork in the lottery ticket hypothesis (Frankle & Carbin, 2019). Additionally, the existing permutation training algorithm can be viewed as applying an order-preserving projection from the free training results to the initial weight value, sharing the same form as weight projection methods in continual learning (Zeng et al., 2019). This work is expected to facilitate the practical applications of permutation training. However, some issues still exist and deserve further investigation. Notably, existing initializations derived from the free training situation, such as He’s normal initialization, perform poorly with permutation training in Fig. 3 emphasizing the need for developing more compatible initializations. This could pave the way to effectively training higher-dimensional and deeper networks by weight permutation, thereby meeting the practical requirements. Further, the permutation training itself also has the potential to serve as an initialization protocol (Scabini et al., 2022). The existing attempts at algorithm implementations guide the permutation by Adam-based inner loops, thus incurring undesirable external computation costs. However, if the order relationships can be learned through other time-saving approaches, such as the learn-to-rank formalism (Cao et al., 2007), or permutation search algorithms in the study of LMC (Jordan et al., 2023; Ainsworth et al., 2023), the benefits of permutation training will be actualized in practice. Importantly, our proof is independent of algorithm implementations, which is expected to inspire and motivate the development of more advanced algorithms. 
Overall, we believe that the UAP of permutation-trained networks underscores the profound, yet undiscovered insights into how the weight encodes the learned information, highlighting the importance of further exploration in this field. REFERENCES Samuel Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git Re-Basin: Merging models modulo permutation symmetries. In The Eleventh International Conference on Learning Representations, 2023. Zhiqiang Cai, Jingshuang Chen, and Min Liu. Least-squares ReLU neural network (LSNN) method for linear advection-reaction equation. Journal of Computational Physics, 443:110514, 2021. Peter J Cameron. Permutation groups. Number 45. Cambridge University Press, 1999. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, pp. 129–136, 2007. Shiyi Chen and Gary D Doolen. Lattice boltzmann method for fluid flows. Annual review of fluid mechanics, 30(1):329–364, 1998. Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995. Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990–2999. PMLR, 2016. George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989. Rahim Entezari, Hanie Sedghi, Olga Saukh, and Behnam Neyshabur. The role of permutation invariance in linear mode connectivity of neural networks. In International Conference on Learning Representations, 2021. Johannes Feldmann, Nathan Youngblood, Maxim Karpov, Helge Gehring, Xuan Li, Maik Stappers, Manuel Le Gallo, Xin Fu, Anton Lukashchuk, Arslan Sajid Raja, et al. Parallel convolutional processing using an integrated photonic tensor core. Nature, 589(7840):52–58, 2021. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceedings of the International Conference on Learning Representations, 2019. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pp. 3259–3269. PMLR, 2020. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In International conference on machine learning, pp. 1319–1327. PMLR, 2013. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In Advances in Neural Information Processing Systems, volume 27, 2014. Wolfgang Hackbusch. Multi-grid methods and applications, volume 4. Springer Science & Business Media, 2013. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015. John E Hopcroft, Jeffrey D Ullman, and Alfred Vaino Aho. 
Data Structures and Algorithms, volume 175. Addison-Wesley, Boston, MA, USA, 1983.
oAMArMMQxb
Essentially, this work side-steps the problem of learning the score function altogether: despite citing the seminal work from Hyvarinen 2005, the authors chose not to bring to the readers' attention the fact that in practical cases, even the vanilla score can be hard to compute, as it can require costly computations of the trace of the Hessian of the parametric score, which derives from a rewrite of the Fisher divergence. So, I think it is important to mention that the parametric score function does not come for free in general settings.
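To make the point concrete, the vanilla (Hyvärinen) score-matching objective obtained by rewriting the Fisher divergence is $J(\theta) = \mathbb{E}_{x \sim p}\big[\operatorname{tr}\nabla_x s_\theta(x) + \tfrac{1}{2}\|s_\theta(x)\|^2\big]$, and the trace term is the costly part. The following PyTorch sketch is our own illustration (not code from the paper under review): it evaluates this objective with one extra autodiff pass per input dimension and fits a hypothetical linear score model to Gaussian data.

```python
import torch

def score_matching_loss(score_fn, x):
    # Vanilla (Hyvarinen) score-matching objective, up to an additive constant:
    #   J = E[ tr(d s(x)/dx) + 0.5 * ||s(x)||^2 ]
    # The trace term costs one autodiff pass per input dimension,
    # which is the expense referred to in the comment above.
    x = x.requires_grad_(True)
    s = score_fn(x)                                    # shape (batch, d)
    norm_term = 0.5 * (s ** 2).sum(dim=1)
    trace = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):                        # one backward pass per coordinate
        grad_i = torch.autograd.grad(s[:, i].sum(), x, create_graph=True)[0][:, i]
        trace = trace + grad_i
    return (trace + norm_term).mean()

# toy fit: linear score model s(x) = x A^T + b for data from N(1, 4 I);
# the true score is -(x - 1)/4, so A should approach -0.25 I and b should approach 0.25.
d = 3
A = torch.zeros(d, d, requires_grad=True)
b = torch.zeros(d, requires_grad=True)
data = 1.0 + 2.0 * torch.randn(512, d)
opt = torch.optim.Adam([A, b], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = score_matching_loss(lambda z: z @ A.T + b, data)
    loss.backward()
    opt.step()
print(A.diagonal().detach(), b.detach())
```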
Sampling Multimodal Distributions with the Vanilla Score: Benefits of Data-Based Initialization Frederic Koehler Department of Statistics and Data Science Institute University of Chicago Chicago, IL 60637 fkoehler@uchicago.edu Thuy-Duong Vuong Department of Computer Science Stanford University Stanford, CA 94305 tdvuong@stanford.edu Abstract There is a long history, as well as a recent explosion of interest, in statistical and generative modeling approaches based on score functions — derivatives of the log-likelihood of a distribution. In seminal works, Hyvärinen proposed vanilla score matching as a way to learn distributions from data by computing an estimate of the score function of the underlying ground truth, and established connections between this method and established techniques like Contrastive Divergence and Pseudolikelihood estimation. It is by now well-known that vanilla score matching has significant difficulties learning multimodal distributions. Although there are various ways to overcome this difficulty, the following question has remained unanswered — is there a natural way to sample multimodal distributions using just the vanilla score? Inspired by a long line of related experimental works, we prove that the Langevin diffusion with early stopping, initialized at the empirical distribution, and run on a score function estimated from data successfully generates natural multimodal distributions (mixtures of log-concave distributions). 1 Introduction Score matching is a fundamental approach to generative modeling which proceeds by attempting to learn the gradient of the log-likelihood of the ground truth distribution from samples ("score function") [Hyvärinen (2005)]. This is an elegant approach to learning energy-based models from data, since it circumvents the need to compute the (potentially intractable) partition function which arises in Maximum Likelihood Estimation (MLE). Besides the original version of the score matching method (often referred to as vanilla score matching), many variants have been proposed and have seen dramatic experimental success in generative modeling, especially in the visual domain (see e.g. Song & Ermon (2019); Song et al. (2020b); Rombach et al. (2022)). In this work, we revisit the vanilla score matching approach. It is known that learning a distribution via vanilla score matching generally fails in the multimodal setting (Wenliang et al. 2019; Song & Ermon 2019; Koehler et al. 2022). However, there are also many positive aspects of modeling a distribution with the vanilla score. To name a few: 1. Simplicity to fit: computing the best estimate to the vanilla score is easy in many situations. For example, there is a simple closed form solution the class of models being fit is an exponential family (Hyvärinen 2007b), and this in turn lets us compute the best fit in a kernel exponential family (see e.g. Sriperumbudur et al. (2017); Wenliang et al. (2019)). 2. Compatibility with energy-based models: for a distribution \( p(x) \propto \exp(E(x)) \), the vanilla score function is \( \nabla E(x) \) so it is straightforward to go between the energy and the score function. This is related to the previous point (why exponential families are simple to score match), and also why it is easy to implement the Langevin chain for sampling an energy-based model. 3. 
Statistical inference: in cases where vanilla score matching does work well, it comes with attractive statistical features like \( \sqrt{n} \)-consistency, asymptotic normality, relative efficiency... guarantees compared to the MLE, etc. — see e.g. Barp et al. (2019); Forbes & Lauritzen (2015); Koehler et al. (2022); Song et al. (2020a). In addition, score matching is also closely related to other celebrated methods for fitting distributions which have been successfully used for a long time in statistics and machine learning — pseudo-likelihood estimation (Besag 1975) and contrastive divergence training (Hinton 2002). (See e.g. Hyvärinen (2007a); Koehler et al. (2022).) For these reasons, we would like to better understand the apparent failure of score matching in the multimodal setting. In this work, we study score matching in the context of the most canonical family of multimodal distributions — mixtures of log-concave distributions. (As a reminder, any distribution can be approximated by a sufficiently large mixture, see e.g. Wasserman (2006).) While vanilla score matching itself does not correctly estimate these distributions, we show that the trick of using “data-based initialization” when sampling, which is well-known in the context of CD/MLE training of energy based models (see e.g. Hinton (2012); Xie et al. (2016) and further references below), provably corrects the bias of any model which accurately score matches the ground truth distribution. 1.1 Our Results We now state our results in full detail. We are interested in the question of generative modeling using the vanilla score function. Generally speaking, there is some ground truth distribution $\mu$, which for us we will assume is a mixture of log-concave distributions, and we are interested in outputting a good estimate $\hat{\mu}$ of it. We show that this is possible provided access to: 1. A good estimate of the score function of $\nabla \log \mu$. (In many applications, this would be learned from data using a procedure like score matching.) 2. A small number of additional samples from $\mu$, which are used for data-based initialization. To make the above points precise, the following is our model assumption on $\mu$: **Assumption 1.** We assume probability distribution $\mu$ is a mixture of $K$ log-concave components: explicitly, $\mu = \sum_{i=1}^{K} p_i \mu_i$ for some weights $p_1, \ldots, p_K$ s.t. $p_i > 0$ and $\sum_i p_i = 1$. Furthermore, we suppose the density of each component $\mu_i$ is $\alpha$ strongly-log-concave and $\beta$-smooth with $\beta \geq 1$ i.e. $\alpha I \preceq -\nabla^2 \log \mu_i(x) \preceq \beta I$ for all $x$. We define the notation $p_* = \min_i p_i$ and $\kappa = \beta/\alpha \geq 1$. **Remark 1.** The assumption that $\mu_i$ is $\alpha$-strongly log-concave and $\beta$-smooth is the most standard setting where the Langevin dynamics are guaranteed to mix rapidly (see e.g. Dalalyan (2017)). and the following captures formally what we mean by a “good estimate” of the score function: **Definition 1.** For $\mu$ a probability distribution with smooth density $\mu(x)$, an $\epsilon_{\text{score}}$-accurate estimate of the score in $L_2(\mu)$ is a function $s$ such that $$\mathbb{E}_{x \sim \mu}[||s(x) - \nabla \log \mu(x)||^2] \leq \epsilon_{\text{score}}^2.$$ (1) As discussed in the below remark, this is the standard and appropriate assumption to make when score functions are learned from data. There are also other settings of interest where the ground truth score function is known exactly (e.g. 
$\mu$ is an explicit energy-based model which we have access to, and we want to generate more samples from it) in which case we can simply take $\epsilon_{\text{score}} = 0$.

**Remark 2.** Assumption (1) says that on average over a fresh sample from the distribution, $s(x)$ is a good estimate of the true score function $\nabla \log \mu(x)$. This is the right assumption when score functions are estimated from data, because it is generally impossible to learn the score function far from the support of the true distribution. See the previous work e.g. Chen et al. (2023); Lee et al. (2022a,b); Block et al. (2020) where the same distinction is discussed in more detail. Given a class of functions which contains a good model for the true score function and has a small Rademacher complexity compared to the number of samples, the function output by vanilla score matching will achieve small $L_2$ error (see proof of Theorem 1 of Koehler et al. (2022)). In particular, this can be straightforwardly applied to parametric families of distributions like mixtures of Gaussians. We would also generally expect this assumption to be satisfied when the distribution is successfully learned via other learning procedures, such as MLE/contrastive divergence. (See related simulation in Appendix H.)

---
1 We can always re-scale the domain so that $\beta \geq 1$.
2 For example, one use case of generative modeling is when we have the ground truth and want to accelerate an existing sampler which is expensive to run, see e.g. Albergo et al. (2021); Lawrence & Yamauchi (2021).
---

We show the distribution output by Langevin dynamics on an approximate score function will be close to the ground truth provided (1) we initialize the Langevin diffusion from the empirical distribution of samples, and (2) we perform early stopping of the diffusion, so that it does not reach its stationary distribution. Formally, let the Langevin Monte Carlo (LMC, a.k.a. discrete-time Langevin dynamics) chain with initial state $X_0$, score function $s$, and step size $h > 0$ be defined by the recursion $$X_{h(i+1)} = X_{hi} + h s(X_{hi}) + \sqrt{2h} \Delta_{hi}$$ where each noise variable $\Delta_{hi} \sim N(0, I)$ is independent of the previous ones. Our main result gives a guarantee for sampling with LMC started from a small set of samples and run for time $T$:

**Theorem 1.** Let $\epsilon_{TV} \in (0, 1/2)$. Suppose $\mu$ is a mixture of strongly log-concave measures as in Assumption 1 and $s$ is a function which estimates the score of $\mu$ within $L_2$ error $\epsilon_{score}$ in the sense of Definition 1. Let $$T = \tilde{\Theta}\left(\frac{\exp(K)d\kappa}{p_*\epsilon_{TV}}\right)^{O_K(1)}, \quad h = \tilde{\Theta}\left(\frac{\epsilon_{TV}^4}{\beta K^2 \exp(K)^4 d^3 T}\right).$$ Let $U_{\text{sample}}$ be a set of $M$ i.i.d. samples from $\mu$ and $\nu_{\text{sample}}$ be the uniform distribution over $U_{\text{sample}}$. Suppose that $M = \Omega(p_*^{-2} \epsilon_{TV}^4 K^4 \log(K/\epsilon_{TV}) \log(K/\tau))$, and that $$\epsilon_{score} \leq \frac{p_*^{1/2} \sqrt{h} \epsilon_{TV}^2}{7T} = \tilde{\Theta}\left(\frac{p_*^{1/2} \epsilon_{TV}^4}{\beta K^2 \exp(K)^2 d^{3/2} T^{3/2}}\right).$$ Let $(X_{\nu_{\text{sample}} n})_{n \in \mathbb{N}}$ be the LMC chain with score $s$ and step size $h$ initialized at $\nu_{\text{sample}}$.
Then with probability at least $1 - \tau$ over the randomness of $U_{\text{sample}}$, the conditional law $\hat{\mu} = \mathcal{L}(X_{\nu_{\text{sample}} T} | U_{\text{sample}})$ satisfies $$d_{TV}(\hat{\mu}, \mu) \leq \epsilon_{TV}. \quad (2)$$ We now make a few comments to discuss the meaning of the result. Conclusion (2) says that we have successfully found an $\epsilon_{TV}$-close approximation of the ground truth distribution $\mu$. Unpacking the definitions, it says that with high probability over the sample set: (1) picking a uniform sample from the training set, and (2) running the Langevin chain for time $T$ will generate an $\epsilon_{TV}$-approximate sample from the distribution $\mu$. Note in particular that we can draw as many samples as we like from the distribution without needing new training data. The fact that this is conditional on the dataset is a key distinction: the marginal law of any element of the training set would be $\mu$, but its conditional law is a delta-distribution at that training sample, and the conditional law is what is relevant for generative modeling (being able to draw new samples from the right distribution). See also Figure 1 for a simulation which helps illustrate this distinction. **Remark 3.** Provided the number of components in the mixture is $O(1)$, i.e. upper bounded by a constant, the dependence on all other parameters is polynomial or logarithmic. It is possible to remove the dependence on the minimum weight $p_*$ completely — see Corollary 2 in Appendix H. **Remark 4.** It turns out Theorem 1 is a new result even in the very special case that the ground truth is unimodal. The closest prior work is Theorem 2.1 of Lee et al. (2022a), where it was proved that the Langevin diffusion computed using an approximate score function succeeds to approximately sample from the correct distribution given a (polynomially-)warm start in the $\chi^2$-divergence. However, while the empirical distribution of samples is a natural candidate for a warm start, in high dimensions it will not be anywhere close to the ground truth distribution unless we have an exponentially large (in the dimension) number of samples, due to the “curse of dimensionality”, see e.g. Wasserman (2006). ### 1.2 Further Discussion **One motivation: computing score functions at substantial noise levels can be computationally difficult.** In some cases, computing/learning the vanilla score may be a substantially easier task than alternatives; for example, compared to learning the score function for all noised versions of the ground truth (as used in diffusion models like Song & Ermon (2019)). As a reminder, denoising diffusion models are based on the observation that the score function of a noised distribution $N(0, \sigma^2 I) * p$ exactly corresponds to a Bayesian denoising problem: computing the posterior mean on $X \sim p$ given a noisy observation $Y \sim N(x, \sigma^2 I)$ Vincent (2011); Block et al. (2020), via the equation $$y + \sigma^2 \nabla \log(N(0, \sigma^2 I) * p)(y) = \mathbb{E}[X | Y = y].$$ Unlike the vanilla score function this will not be closed form for most energy-based models; the optimal denoiser might be complex when the signal is immersed in a substantive amount of noise. For example, results in the area of computational-statistical gaps tell us that for certain values of the noise level $\sigma$ and relatively simple distributions $p$, approximate denoising can be average-case computationally hard under widely-believed conjectures. 
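The sampling procedure analyzed in Theorem 1 is short to write down: run the LMC recursion above from the empirical distribution of a few training samples and stop at time $T$. The following NumPy sketch is our own one-dimensional illustration (not the paper's code), using the exact score of a two-component Gaussian mixture, i.e. the $\epsilon_{\text{score}} = 0$ case; with a learned score, the early stopping is what prevents the bias of running the chain to stationarity.

```python
import numpy as np

rng = np.random.default_rng(0)
means, sigma = np.array([-3.0, 3.0]), 1.0          # ground truth: 0.5 N(-3,1) + 0.5 N(3,1)

def score(x):
    # exact score of the mixture (the epsilon_score = 0 case); in applications s is learned
    w = np.exp(-0.5 * (x[:, None] - means) ** 2 / sigma**2)
    w /= w.sum(axis=1, keepdims=True)               # posterior responsibilities
    return (w @ means - x) / sigma**2

# data-based initialization: 40 training samples, each chain starts at one of them
data = np.where(rng.random(40) < 0.5, -3.0, 3.0) + sigma * rng.normal(size=40)
X = rng.choice(data, size=5000, replace=True)

h, T = 0.01, 2.0                                    # step size and early-stopping time
for _ in range(int(T / h)):
    X = X + h * score(X) + np.sqrt(2 * h) * rng.normal(size=X.shape)   # LMC recursion

print((X > 0).mean())                               # close to the fraction of positive samples in `data`
```

Because each chain starts at a training point and is stopped early, the relative weights of the two modes are inherited from the data rather than from the (possibly mis-weighted) stationary distribution of an estimated score.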
For example, let $p$ be a distribution over matrices of the form $N(rr^T, \epsilon^2)$ with $r$ a random sparse vector and $\epsilon > 0$ small. Then the denoising problem for this distribution will be “estimation in the sparse spiked Wigner model”. In this model, for a certain range of noise levels $\sigma$ performing optimal denoising is as hard as the (conjecturally intractible) “Planted Clique” problem Brennan et al. (2018); in fact, even distinguishing this model from a pure noise model with $r = 0$ is computationally hard despite the fact it is statistically possible — see the reference for details. So unless the Planted Clique conjecture is false, there is no hope of approximately computing the score function of $p * N(0, \sigma^2)$ for these values of $\sigma$. On the other hand, there is no computational obstacle to computing the score of $p$ itself provided $\epsilon > 0$ is small — denoising is only tricky once the noise level becomes sufficiently large. **Related Experimental Work.** As mentioned before, many experimental works have found success generating samples, especially of images, by running the Langevin diffusion (or other Markov chain) for a small amount of time. One aspect which varies in these works is how the diffusion is initialized. To use the terminology of Nikkamp et al. (2020), the method we study uses an informative/data-based initialization similar to contrastive divergence Hinton (2012); Gao et al. (2018); Xie et al. (2016). While in CD the early stopping of the dynamics is usually motivated as a way to save computational resources, the idea that stopping the sampler early can improve the quality of samples is consistent with experimental findings in the literature on energy-based models. As the authors of Nikkamp et al. (2020) say, “it is much harder to train a ConvNet potential to learn a steady-state over realistic images. To our knowledge, long-run MCMC samples of all previous models lose the realism of short-run samples.” One possible intuition for the benefit of early stopping, consistent with our analysis and simulations, is that it reduces the risk of stepping into low-probability regions where the score function may be poorly estimated. Some works have also found success using random/uninformative initializations with appropriate tweaks Nikkamp et al. (2019, 2020), although they still found informative initialization to have some advantages — for example in terms of output quality after larger numbers of MCMC steps. Finally, we recall that the success of many recent experimental works which fit score functions with neural networks (e.g. Song & Ermon (2019); Song et al. (2020b); Rombach et al. (2022); Ho et al. (2020)). **Related Theoretical Work.** The works Block et al. (2020); Lee et al. (2022a) established results for learning unimodal distributions (in the sense of being strongly log-concave or satisfying a log-Sobolev inequality) via score matching, provided the score functions are estimated in an $L_2$ sense. The work Koehler et al. (2022) showed that the sample complexity of vanilla score matching is related to the size of a restricted version of the log-Sobolev constant of the distribution, and in particular proved negative results for vanilla score matching in many multimodal settings. The works Lee et al. (2022b); Chen et al. (2023) proved that even for multimodal distributions, annealed score matching will successfully learn the distribution provided all of the annealed score functions can be successfully estimated in $L_2$. 
In our work we only assume access to a good estimate of the vanilla score function, but still successfully learn the ground truth distribution in a multimodal setting. In the sampling literature, our result can be thought of establishing a type of metastability statement, where the dynamics become trapped in local minima for moderate amounts of time — see e.g. Tzen et al. (2018) for further background. Also in the sampling context, the works Lee et al. (2018); Ge et al. (2018) studied a related problem, where the goal is to sample a mixture of isotropic Gaussians given black-box access to the score function (which they do via simulated tempering). This problem ends up to be different to the ones arising in score matching: they need exact knowledge of the true score function (far away from the support of the distribution), but they do not have access to training... data from the true distribution. As a consequence of the differing setup, they prove an impossibility result (Ge et al., 2018, Theorem F.1) for a mixture of two Gaussians with covariances $I$ and $2J$ (it will not be possible to find both components), but our result proves this is not an issue in our setting. **Questions for future work.** In our result, we proved the first bound for sampling with the vanilla score, estimated from data, which succeeds in the multimodal setting, but it is an open question if the dependence on the number of components is optimal; it seems likely that the dependence can be improved, at least in many cases. Finally, it is interesting to ask what the largest class of distributions our result can generalize to — with data-based initialization, multimodality itself is no longer an obstruction to sampling with Langevin from estimated gradients, but are there other possible obstructions? ## 2 Technical Overview We first review some background and notation which is helpful for discussing the proof sketch. We leave complete proofs of all results to the appendices. **Notation.** We use standard big-Oh notation and use tildes, e.g. $\tilde{O}(\cdot)$, to denote inequality up to log factors and $O_B(\cdot)$ to denote an inequality with a constant allowed to depend on $B$. We let $d_{TV}(\mu, \nu) = \sup_A |\mu(A) - \nu(A)|$ be the usual total variation distance between probability measures $\mu$ and $\nu$ defined on the same space, where the supremum ranges over measurable sets. Given a random variable $X$, we write $\mathcal{L}(X)$ to denote its law. **Log-Sobolev inequality.** We say probability distribution $\pi$ satisfies a log-Sobolev inequality (LSI) with constant $C_{LS}$ if for all smooth functions $f$, $\mathbb{E}_\pi[f^2 \log(f^2/\mathbb{E}_\pi[f^2])] \leq 2C_{LS}\mathbb{E}_\pi[|\nabla f|^2]$. Due to the Bakry-Emery criterion, if $\pi$ is $\alpha$-strongly log-concave then $\pi$ satisfies LSI with constant $C_{LS} = 1/\alpha$. LSI is equivalent to a statement about mixing of the Langevin dynamics — if we let $\pi_t$ denote the law of the diffusion at time $t$ then an LSI is equivalent to the inequality $$D_{KL}(\pi_t || \pi) \leq \exp(-2t/C_{LS})D_{KL}(\pi_0 || \pi)$$ holding for an arbitrary initial distribution $\pi_0$. Here $D_{KL}(P,Q) = \mathbb{E}_P[\log \frac{dP}{dQ}]$ is the Kullback-Liebler divergence. See Bakry et al. (2014); Van Handel (2014) for more background. **Stochastic calculus.** We will need to use stochastic calculus to compare the behavior of similar diffusion processes — see Karatzas & Shreve (1991) for formal background. 
Let $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ be two Ito processes defined by SDEs: $dX_t = s_1(X_t)dt + dB_t$ and $dY_t = s_2(X_t)dt + dB_t$. Let $P_T, Q_T$ be the laws of the paths $(X_t)_{t \in [0,T]}$ and $(Y_t)_{t \in [0,T]}$ respectively. The following follows by Girsanov’s theorem (see Chen et al., 2023, Eq. (5.5) and Theorem 9)) $$d_{TV}(Y_T, X_T)^2 \leq d_{TV}(Q_T, P_T)^2 \leq \frac{1}{2}\mathbb{E}_{Q_T}\left[\int_0^T \|s_2(Y_t) - s_1(Y_t)\|^2 dt\right]$$ In particular, this is useful to compare continuous and discrete time Langevin diffusions. If $(Y_t)$ be the continuous Langevin diffusion with score function $s$, and $(X_t)$ is a linearly interpolated version of the discrete-time Langevin dynamics defined by $dX_t = s(X_{[t/h]h})dt + dB_t$, then $$d_{TV}(Y_T, X_T)^2 \leq \frac{1}{2}\mathbb{E}_{Q_T}\left[\int_0^T \|s(Y_t) - s(Y_{[t/h]h})\|^2 dt\right]$$ (3) ### 2.1 Proof Sketch **High-level discussion.** At a high level, our argument proceeds by (1) grouping the components of the mixture into larger “well-connected” pieces, and (2) showing that the process mixes well within each of these pieces, while preserving the correct relative weight of each piece. One of the challenges in proving our result is that, contrary to the usual situation in the analysis of Markov chains (as in e.g. Bakry et al., 2014; Levin & Peres, 2017), we do not want to run the Langevin diffusion until it mixes to its stationary distributions. If we ran the process until mixing, then we would be performing the vanilla score matching procedure which provably fails in most multimodal settings because it incorrectly weights the different components [Koehler et al., 2022]. So what we want to do is prove the process succeeds at some intermediate time $T$ (See Figure 1 for a simulation illustrating this.) To build intuition, consider the special case where all of the components in the mixture distributions are very far from each other. In this case, one might guess that taking $T$ to be the maximum of the mixing times of each of the individual components will work. Provided there are enough samples in the dataset, the initialization distribution will accurately model the relative weights of the different clusters in the data, and running the process up to time $T$ will approximately sample from the cluster that the initialization is drawn from. We could hope to prove the result by arguing that the dynamics on the mixture is close to the dynamics on one of the mixture components. Some challenges to overcome in the analysis. This is the right intuition, but for the general case the behavior of the dynamics is more complicated. When components are close, the score function of the mixture distribution may not be close to the score function of either component in the region of overlap; relatedly, particles may cross over between components. Also, the following remark shows that natural variants of our main theorem are actually false. **Remark 5.** We might think that initializing from the center of each mixture component would work just as well as initializing from samples. This is fine if the clusters are all very far from each other, but wrong in general. If the underlying mixture distribution is $\frac{1}{2}N(0, I_d) + \frac{1}{2}N(0, 2I_d)$ and the dimension $d$ is large, then the first component will have almost all of its mass within distance $O(1)$ of a sphere of radius $\sqrt{d}$ and the second component will similarly concentrate about a sphere of radius $\sqrt{2d}$. (See Theorem 3.1.1 of Vershynin [2018].) 
As a consequence, the dynamics initialized at the origin will mix within the shell of radius $\sqrt{d}$ but take $\exp(\Omega(d))$ time to cross to the larger $\sqrt{2d}$ shell. (This can be proved by observing that the gap between the two spheres forms a “bottleneck” for the dynamics, see Levin & Peres [2017].) In contrast, if we initialize from samples then approximately half of them will lie on the outer shell and, as we prove, the dynamics mix correctly. We now proceed to explain in more detail how we prove our result. We start with the analysis of an idealized diffusion process, and then through several comparison arguments establish the result for the real LMC algorithm. **Analysis of idealized diffusion.** To start out, we analyze an idealized process in which: 1. The score function $\nabla \log \mu$ is known exactly. (Our result is still new in this case.) 2. The dynamics is the continuous-time Langevin diffusion given by the Ito process \[ d\tilde{X}_t = \nabla \log \mu(\tilde{X}_t) \, dt + \sqrt{2} \, dB_t. \] This is the scaling limit of the discrete-time LMC chain as we take the step size $h \to 0$, where $dB_t$ is the differential of a Brownian motion $B_t$. 3. For purposes of exposition, we make the fictitious assumption that the ground truth distribution $\mu$ is supported in a ball of radius $R$. This will not be literally true, but for sufficiently large $R$ $\mu$ will be almost entirely contained within a radius $R$ ball. (In the supplement, we handle this rigorously using concentration, see e.g. proof of Lemma 11 of Appendix F). Additionally, for the purpose of illustration, in this proof sketch we assume the target distance in TV is 0.01 and consider the case where there are two $\alpha$-strongly log concave and $\beta$-smooth components $\mu_1$ and $\mu_2$, and $\mu = \frac{1}{2}\mu_1 + \frac{1}{2}\mu_2$. After we complete the proof sketch for this setting, we will go back and explain how to generalize the analysis to arbitrary mixtures, handle the error induced by discretization, and finally make the analysis work with an $L_2$ estimate of the true score function. **Overlap parameter.** We define \[ \delta_{12} := 1 - d_{TV}(\mu_1, \mu_2) = \int \min\{\mu_1(x), \mu_2(x)\} \, dx \] as a quantitative measure of how much components 1 and 2 overlap; for example, $\delta_{12} = 1$ iff $\mu_1$ and $\mu_2$ are identical. The analysis splits into cases depending on whether $\delta_{12}$ is large; we let $\delta > 0$ be a parameter which determines this split and which will be optimized at the end. High overlap case (Appendix C). If $\mu_1$ and $\mu_2$ has high overlap, in the sense that $\delta_{12} \geq \delta$, then we show that $\mu$ satisfies a log Sobolev inequality with constant at most $O(1/(\alpha \delta))$, by applying our Theorem 2, an important technical ingredient which is discussed in more detail below. Thus for a typical sample $x$ from $\mu$, the continuous Langevin diffusion $(X_t^{\delta_x})_{t \geq 0}$ with score function $\nabla \log \mu$ initialized at $x$ converges to $\mu$ i.e. $d_{TV}(\mathcal{L}(X_t^{\delta_x}), \mu) \leq \epsilon$ for $T \geq \Omega(\frac{1}{\alpha \delta} \log(d e^{-1}))$. Low overlap case (Appendix F, Lemma 7). When $\mu_1$ and $\mu_2$ have small overlap i.e. $\delta_{12} \leq \delta$, we will show that for $x \sim \mu$, with high probability, the gradient of the log-likelihood of the mixture distribution $\mu$ at $x$ is close to that of one of the components $\mu_1, \mu_2$ (Appendix F, T). 
This is because, supposing that $||x|| \leq R$, for $i \in \{1, 2\}$ we can upper bound $$||\nabla \log \mu(x) - \nabla \log \mu_i(x)|| \leq 2\beta R \left(1 - \frac{\mu_i(x)}{\mu_1(x) + \mu_2(x)}\right),$$ and low overlap implies that $\min_i \left(1 - \frac{\mu_i(x)}{\mu_1(x) + \mu_2(x)}\right)$ is small for typical $x \sim \mu$. Consider the continuous Langevin diffusion $(X_t^{\delta_x})$ initialized at $\delta_x$, i.e. $X_0 = x$. Observe that the marginal law of $X_t^{\delta_x}$ where $x \sim \mu$ is exactly $\mu$, since $\mu$ is the stationary distribution of the Langevin diffusion. Let $H > 0$ be a parameter to be tuned later. The above discussion and Markov’s inequality allow us to argue that for a typical sample $x$, the gradient of the log-likelihood of $\mu$ at $X_{nH}^{\delta_x}$ is close to that of one of the components $\mu_1, \mu_2$ with high probability. Next, we perform a union bound over $n \in \{0, \cdots, N - 1\}$ and bound the drift $||\nabla \log \mu(x) - \nabla \log \mu_i(x)||$ in each small time interval $[nH, (n+1)H]$. By doing so, we can argue that for a typical sample $x \sim \mu$, with probability at least $1 - \epsilon^{-1}\beta R N \delta_{12}$ over the randomness of the Brownian motion driving the Langevin diffusion, the gradient of the log-likelihood at $X_t^{\delta_x}$ for $t \in [0, NH]$ is close to that of the component distribution $\mu_i$ closest to the initial point $x$ (see Proposition 26 of Appendix F). In other words, assuming that the initial point $x$ satisfies $\mu_1(x) \geq \mu_2(x)$ and letting $T = NH$, we can show that with high probability, $$\sup_{t \in [0,T]} ||\nabla \log \mu(X_t^{\delta_x}) - \nabla \log \mu_1(X_t^{\delta_x})|| \leq 1.1\epsilon.$$ This allows us, using (3), to compare our Langevin diffusion with the one with score function $\nabla \log \mu_1$ and show that the output at time $T$ is approximately a sample from $\mu_1$. In a typical set $U_{\text{sample}}$ of i.i.d. samples from $\mu$, roughly 50% of the samples $x \in U_{\text{sample}}$ satisfy $\mu_1(x) \geq \mu_2(x)$ and the other 50% satisfy $\mu_2(x) \geq \mu_1(x)$, so the Langevin dynamics $(X_t^{\nu_{\text{sample}}})_{t \geq 0}$ initialized at the uniform distribution $\nu_{\text{sample}}$ over $U_{\text{sample}}$ will be close to $\frac{\mu_1 + \mu_2}{2} = \mu$ after time $T$, provided we set $H, T, \epsilon, \delta$ appropriately.

**Concluding the idealized analysis.** Either $\delta_{12} \geq \delta$, in which case the high-overlap analysis above based on the log-Sobolev constant succeeds, or $\delta_{12} < \delta$, in which case the low-overlap analysis succeeds. Optimizing over $\delta$, we find that in either case, with high probability over the set $U_{\text{sample}}$ of samples from $\mu$, for $t \geq \tilde{\Omega}((\beta R)^3/\alpha^{3/2})$ we have $$d_{TV}(\mathcal{L}(X_t^{\nu_{\text{sample}}}|U_{\text{sample}}), \mu) \leq 0.01$$ as desired.

**Generalizing the idealized analysis to arbitrary mixtures (Appendix F, Theorem 5).** When there are more than two components, we can generalize this analysis — the key technical difficulty, alluded to earlier, is analyzing the overlap between different mixture components. We do this by defining, for each $\delta > 0$, a graph $G^\delta$ where there is an edge between $i, j \in [K]$ when the two components have high overlap, i.e. $\delta_{ij} := 1 - d_{TV}(\mu_i, \mu_j) \geq \delta$.
As long as the minimum of the weights $p_* := \min_i p_i$ is not too small, each connected component $C$ of $G^\delta$ is associated with a probability distribution $\mu_C = \sum_{i \in C} p_i \mu_i / \sum_{i \in C} p_i$ that has log Sobolev constant on the order of $O_K(p_*^{-1}(1/\alpha \delta))$. This follows as LSI yields exponential convergence in KL-divergence. While the KL-divergence of the initialization $\delta_x$ with respect to $\mu$ is unbounded, we can bound the KL-divergence of $X_h^{\delta_x}$ for some small $h$. Suppose for a moment that the connected components are well separated compared to the magnitude of \( \delta \). More precisely, suppose that for \( i,j \) in different connected components and some \( \delta > 0 \) we have \[ \delta_{ij} \leq f(\delta) := \Theta \left( \frac{(\alpha \delta)^{3/2}}{(\beta R)^3} \right). \] (4) Then, a direct generalization of the argument for two components shows that for a typical set \( U_{\text{sample}} \) of i.i.d. samples from \( \mu \), the continuous Langevin diffusion \( (\bar{X}_t^{\nu_{\text{sample}}})_{t \geq 0} \) initialized at the uniform distribution over \( U_{\text{sample}} \) converges to \( \mu \) after time \( T_\delta = (\alpha \delta)^{-1} \). It remains to discuss how we select \( \delta \) so that (4) is satisfied. We consider a decreasing sequence \( 1 = \delta_0 > \delta_1 > \cdots > \delta_{K-1} \) where \( \delta_{r+1} = f(\delta_r) \) as in Eq. (4). Let \( G^r := G^{\delta_r} \). If any two vertices from different connected components of \( G^r \) have overlap at most \( \delta_{r+1} \), then the above argument applies. Otherwise, \( G^{r+1} \) must have one less connected component than \( G^r \), and since \( G^0 \) has at most \( K \) connected components, \( G^{K-1} \) must have 1 connected component and the above argument applies to it. Thus, in all cases, the distribution of \( \bar{X}_{T_{K-1}}^{\nu_{\text{sample}}} \) is close to \( \mu \) in total variation distance. **Discretization analysis.** (Appendix G, Lemma 7.4) We now move from a continuous-time to discrete-time process. Let \( (X_{nh})_{n \in \mathbb{N}} \) and \( (\tilde{X}_t)_{t \geq 0} \) be respectively the LMC with step size \( h \) and the continuous Langevin diffusion. Both are with score function \( \nabla \log \mu \) and have the same initialization. By an explicit calculation, we can bound \( ||\nabla^2 \log \mu(x)||_{OP} \) along the trajectory of the continuous process. This combined with the consequence of Girsanov’s theorem (3) allows us to bound the total variation distance between the continuous \( (\tilde{X}_t) \) and discretized \( (X_{nh}) \) processes. For appropriate choices of step size \( h \) and time \( T = Nh \), using triangle inequality and the bound \( d_{TV}(\tilde{X}_T, \mu) \), we conclude that the discretized process \( X_{Nh} \) is close to \( \mu \). **Sampling with an \( L_2 \)-approximate score function.** (Appendix G) In many cases, score functions are learned from data, so we only have access to an \( L_2 \)-estimate \( s \) of the score such that \( \mathbb{E}_\mu[||s(x) - \nabla \log \mu(x)||^2] \leq \epsilon_{\text{score}}^2 \). We now describe how to make the analysis work in this setting. 
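For concreteness, the discrete-time object whose output is analyzed below, LMC run with a (possibly estimated) score and initialized at the empirical distribution over samples, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the exact two-component Gaussian mixture score stands in for a learned estimate $s$, and the step size, horizon, and sample size are illustrative.

```python
import numpy as np

def mixture_score(x, means=(-4.0, 4.0), var=1.0):
    """Exact score of an equal-weight 1-D Gaussian mixture, a stand-in for an estimate s."""
    diffs = np.array(means) - x[..., None]              # shape (..., K)
    logw = -0.5 * diffs**2 / var
    w = np.exp(logw - logw.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (w * diffs / var).sum(axis=-1)               # sum_i w_i(x) * (m_i - x) / var

def lmc_from_samples(score, samples, h=1e-2, n_steps=5000, rng=None):
    """Run x_{n+1} = x_n + h * score(x_n) + sqrt(2h) * xi_n from every sample in parallel."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(samples, dtype=float)
    for _ in range(n_steps):
        x = x + h * score(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(1)
samples = np.where(rng.random(40) < 0.5, -4.0, 4.0) + rng.standard_normal(40)
out = lmc_from_samples(mixture_score, samples)
print(out.mean(), (out > 0).mean())   # roughly half of the chains end near each mode
```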
Using Girsanov’s theorem, we can bound the total variation distance between the LMC \( (X_{nh}^{s,\mu})_{n \in \mathbb{N}} \) initialized at \( \mu \) with score estimate \( s \) and the continuous Langevin diffusion \( (\bar{X}_t^{\mu})_{t \geq 0} \) with true score function \( \nabla \log \mu \); thus we can bound the probability that the LMC \( (X_{nh}^{s,\mu})_{n=0,\cdots,N-1} \) hits the bad set \[ B_{\text{score}} := \{ x : \|s(x) - \nabla \log \mu(x)\| \geq \epsilon_{\text{score},1} \}. \] (The idea of defining a “bad set” is inspired by the analysis of Lee et al. (2022a).) Similar to the argument for the continuous process, let \( X_{nh}^{s,\nu_{\text{sample}}} \) denote the LMC with score function \( s \) and step size \( h \) initialized at the empirical distribution \( \nu_{\text{sample}} \). Since we know that \( X_{nh}^{s,\mu} \) avoids the bad set and that \( \mathcal{L}(X_{nh}^{s,\mu}) = \mathbb{E}_{U_{\text{sample}} \sim \mu^{\otimes M}} [\mathcal{L}(X_{nh}^{s,\nu_{\text{sample}}})] \), we have by Markov’s inequality that for a typical \( U_{\text{sample}} \), with high probability over the randomness of the Brownian motion, \( X_{nh}^{s,\nu_{\text{sample}}} \) also avoids the bad set \( B_{\text{score}} \) for all \( 0 \leq n < N \). Thus, we can compare \( X_{nh}^{s,\nu_{\text{sample}}} \) with the LMC with true score function \( \nabla \log \mu \), and conclude that \( \mathcal{L}(X_{Nh}^{s,\nu_{\text{sample}}}) \) is close to \( \mu \) in total variation distance.

### 2.2 Technical ingredient: log-Sobolev constant of well-connected mixtures

The following theorem, which we prove in the appendix, is used in the above argument to bound the log-Sobolev constant of mixture distributions where the components have significant overlap.

**Theorem 2.** Let \( I \) be a set, and consider probability measures \( \{ \mu_i \}_{i \in I} \), nonnegative weights \( (p_i)_{i \in I} \) summing to one, and mixture distribution \( \mu = \sum_i p_i \mu_i \). Let \( G \) be the graph on vertex set \( I \) where there is an edge between \( i,j \) if \( \mu_i, \mu_j \) have high overlap, i.e., \[ \delta_{ij} := \int \min\{\mu_i(x), \mu_j(x)\} dx \geq \delta. \] Suppose \( G \) is connected and let \( p_* = \min_i p_i \). The mixture distribution \( \mu = \sum_i p_i \mu_i \) has log-Sobolev constant \[ C_{LS}(\mu) \leq \frac{C_{|I|,p_*}}{\delta} \max_i C_{LS}(\mu_i) \] where \( C_{|I|,p_*} = 4|I|(1 + \log(p_*^{-1}))p_*^{-1} \) only depends on \( |I| \) and \( p_* \).

Figure 1: Visualization of the distribution of the Langevin dynamics after $T$ iterations when initialized at the empirical distribution and run with an approximate score function estimated from data. Orange density (rightmost figure) is the ground truth mixture of two Gaussians; the empirical distribution (leftmost figure, $T = 0$) consists of 40 iid samples from the ground truth. Langevin dynamics with step size 0.01 is run with an estimated score function, which was fit using vanilla score matching with a one hidden-layer neural network trained on fresh samples; densities (blue) are visualized using a Gaussian Kernel Density Estimate (KDE). Matching our theory, we see that the ground truth is accurately estimated at time $T = 200$ even though it is not at $T = 0$ or $\infty$.

A version of Theorem 2 which bounds the (weaker) Poincaré constant instead appeared before as Theorem 1.2 of Madras & Randall (2002), but the result for the log-Sobolev constant is new to the best of our knowledge. Compared to Chen et al.
(2021), our assumption is milder than their assumption that the chi-square divergence between any two components is bounded. (For example, two non-isotropic Gaussians might have infinite chi-square divergence (see e.g., Schlichting (2019) Section 4.3)), so in that case their result doesn’t imply a finite bound on the LSI of their mixture.) Schlichting (2019) bounds LSI of $\mu = p\mu_1 + (1-p)\mu_2$ when either $\chi^2(\mu_1 || \mu_2)$ or $\chi^2(\mu_2 || \mu_1)$ are bounded; our bound applies to mixtures of more than two components. 3 SIMULATIONS In Figure 1, we simulated the behavior of the Langevin dynamics with step size 0.01 and an estimated score function initialized at the ground truth distribution on a simple 1-dimensional example, a mixture of two Gaussians. If the Langevin dynamics are run until mixing, this corresponds to exactly performing the standard vanilla score matching procedure and this will fail to estimate the ground truth distribution well, which we see in the rightmost subfigure. The empirical distribution (time zero for the dynamics) is also not a good fit to the ground truth, but as our theory predicts the early-stopped Langevin diffusion (subfigure (b)) is indeed a good estimate for the ground truth. In Figure 2, we simulated the trajectories of Langevin dynamics with step size 0.001, again with initialization from samples and a learned score function, in a 32-dimensional mixture of Gaussians. Similar to the one-dimensional example, we can see that at moderate times the trajectories have mixed well within their component, and at large times the trajectories sometimes pass through the region in between the components where the true density is very small. Additional simulations (including an experiment with Contrastive Divergence training) and information is in Appendix I. ACKNOWLEDGMENTS F.K. was supported in part by NSF award CCF1704417, NSF award IIS1908774, and N. Anari’s Sloan Research Fellowship. REFERENCES Michael S Albergo, Denis Boyda, Daniel C Hackett, Gurtej Kanwar, Kyle Cranmer, Sébastien Racaniere, Danilo Jimenez Rezende, and Phiala E Shanahan. Introduction to normalizing flows for Figure 2: 2D projected trajectories of Langevin dynamics up to $T$ iterations with step size 0.001 in a 32-dimensional mixture of Gaussians $\frac{5}{7}N(-6e_1, 1.5I) + \frac{2}{7}N(6e_1, 1.5I)$. The projection is the first two coordinates and the direction of separation of the components is the first axis direction. Langevin is initialized from the empirical distribution (15 iid samples) and run with an approximate score function learned from samples using a one hidden-layer neural network. lattice field theory. *arXiv preprint arXiv:2101.08176*, 2021. Dominique Bakry, Ivan Gentil, Michel Ledoux, et al. *Analysis and geometry of Markov diffusion operators*, volume 103. Springer, 2014. Alessandro Barp, Francois-Xavier Briol, Andrew Duncan, Mark Girolami, and Lester Mackey. Minimum stein discrepancy estimators. *Advances in Neural Information Processing Systems*, 32, 2019. Julian Besag. Statistical analysis of non-lattice data. *Journal of the Royal Statistical Society: Series D (The Statistician)*, 24(3):179–195, 1975. Adam Block, Youssef Mroueh, and Alexander Rakhlin. Generative modeling with denoising auto-encoders and langevin sampling. *arXiv preprint arXiv:2002.00107*, 2020. Matthew Brennan, Guy Bresler, and Wasim Huleihel. Reducibility and computational lower bounds for problems with planted sparse structure. In *Conference On Learning Theory*, pp. 48–166. PMLR, 2018. 
Hong-Bin Chen, Sinho Chewi, and Jonathan Niles-Weed. Dimension-free log-sobolev inequalities for mixture distributions. *Journal of Functional Analysis*, 281(11):109236, 2021. Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R. Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions, 2023. Sinho Chewi, Murat A. Erdogdu, Muhan Bill Li, Ruqi Shen, and Matthew Zhang. Analysis of langevin monte carlo from poincaré to log-sobolev, 2021. Arnak S Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. *Journal of the Royal Statistical Society. Series B (Statistical Methodology)*, pp. 651–676, 2017. Persi Diaconis and Laurent Saloff-Coste. Logarithmic sobolev inequalities for finite markov chains. *The Annals of Applied Probability*, 6(3):695–750, 1996. Peter GM Forbes and Steffen Lauritzen. Linear estimating equations for exponential families with application to gaussian linear concentration models. *Linear Algebra and its Applications*, 473:261–283, 2015. Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, and Ying Nian Wu. Learning generative convnets via multi-grid modeling and sampling. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 9155–9164, 2018. Rong Ge, Holden Lee, and Andrej Risteski. Simulated tempering langevin monte carlo ii: An improved proof using soft markov chain decomposition. *arXiv preprint arXiv:1812.00793*, 2018.
fAGEAEQvRr
In existing works it is commonly the case that convergence/incremental learning hold with high probability (e.g., [1] Theorem 3.3), since the initialization has to be aligned with each component; otherwise it cannot make progress in some direction. Does this paper need to impose similar requirements on the initialization?
GRADIENT DESCENT FOR MATRIX FACTORIZATION: UNDERSTANDING LARGE INITIALIZATION Anonymous authors Paper under double-blind review ABSTRACT In deep learning practice, large random initialization is commonly used. Understanding the behavior of gradient descent (GD) with such initialization is both crucial and challenging. This paper focuses on a simplified matrix factorization problem, delving into the dynamics of GD when using large initialization. Leveraging a novel signal-to-noise ratio argument and an inductive argument, we offer a detailed trajectory analysis of GD from the initial point to the global minima. Our insights indicate that even with a large initialization, GD can exhibit incremental learning, which coincides with experimental observations. 1 INTRODUCTION Understanding generalization and optimization in deep learning remains a pivotal and challenging area of research (Sun, 2019; Jakubovitz et al., 2019). Despite their vast model complexity, neural networks consistently exhibit remarkable generalization properties (Zhang et al., 2016). Conventional theories, such as uniform convergence, fall short in fully explaining this exceptional success, spurring a plethora of new research on generalization. One influential line of research delves into the implicit bias of gradient-based methods (Vardi, 2023). It is believed in these works that gradient-based algorithms induce an implicit bias towards solutions that generalize well. Prominent examples include Soudry et al. (2018)’s work on logistic regression, Arora et al. (2019)’s work on deep matrix factorization, and Ji & Telgarsky (2018)’s work on deep linear networks, among many others. This paper focuses on the implicit bias of gradient descent (GD) in matrix factorization. Matrix factorization acts as a simplified model for neural network study, mirroring the training of a two-layer linear network. Additionally, it is intrinsically linked to a range of engineering problems including matrix sensing, matrix completion, dictionary learning, and phase retrieval, among others (Chi et al., 2019). In recent years, researchers have studied various optimization facets of matrix factorization, encompassing topics like optimization landscape (Sun et al., 2016; 2018; Zhu et al., 2021), global convergence and the convergence rate of GD (Gunasekar et al., 2017; Ma et al., 2018; Chen et al., 2019), and the effects of random initialization (Stöger & Soltanolkotabi, 2021). General theories in non-convex optimization have also shed significant light on the matrix factorization problem. Notably, Lee et al. (2016) show that GD escapes saddle points almost surely under the strict saddle point condition. This implies the global convergence of GD for problems whose local minima are all global minima and whose saddle points are all strict. Despite these advancements in matrix factorization, the theoretical understanding of GD with large initialization remains largely unexplored. Specifically, consider the symmetric matrix factorization problem \[ X^* = \arg\min_{X \in \mathbb{R}^{d \times r}} \| \Sigma - XX^\top \|_F^2, \] (1) where \( \Sigma \in \mathbb{R}^{d \times d} \) is a positive semi-definite matrix of rank at least \( r \). The solutions of problem (1) are given by \( X^*X^{*\top} = \Sigma_r \), where \( \Sigma_r \) is the best rank \( r \) approximation of \( \Sigma \). Finding such \( X^* \) poses a non-convex optimization challenge, and much research has been undertaken to understand GD’s behavior in solving this problem or its variants. 
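As a concrete illustration of the objective in (1) and the gradient descent iteration studied in this paper, the following minimal sketch runs GD on $\|\Sigma - XX^\top\|_F^2$ from a large random initialization $X_0 = \varpi N_0$; up to a constant absorbed into the step size, the update reads $X_{t+1} = X_t + \eta(\Sigma - X_tX_t^\top)X_t$. The dimensions, step size, and spectrum below are illustrative choices, not the exact settings used later in the paper.

```python
import numpy as np

def best_rank_r(Sigma, r):
    """Best rank-r approximation of a symmetric PSD matrix (the target Sigma_r)."""
    w, V = np.linalg.eigh(Sigma)
    idx = np.argsort(w)[::-1][:r]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

def gd_matrix_factorization(Sigma, r, eta=0.05, varpi=0.5, n_steps=500, seed=0):
    """GD on || Sigma - X X^T ||_F^2 from a large random start X_0 = varpi * N_0."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    X = varpi * rng.normal(scale=1.0 / np.sqrt(d), size=(d, r))
    target = best_rank_r(Sigma, r)
    errors = []
    for _ in range(n_steps):
        X = X + eta * (Sigma - X @ X.T) @ X      # GD step; a constant factor is absorbed into eta
        errors.append(np.linalg.norm(target - X @ X.T, "fro"))
    return X, errors

d, r = 200, 2
Sigma = np.diag(np.concatenate([[1.0, 0.5], np.linspace(0.3, 0.0, d - r)]))
X, errs = gd_matrix_factorization(Sigma, r)
print(errs[0], errs[-1])   # the error typically decreases in stages (incremental learning)
```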
For instance, Zhu et al. (2021) demonstrate that problem (1) has no spurious local minima, possesses only strict saddle points, and satisfies a local regularity condition. Such analysis implies that GD converges to the global minima almost surely, and the convergence is at a linear rate if initialized in a local region surrounding the global minima. The global minima are unknown in practice, so researchers also examine GD with inaccurate or random initialization. Stöger & Soltanolkotabi (2021) show that if using a sufficiently small initialization, then GD behaves like a spectral method in early iterations. Based on this, the authors establish the linear global convergence rate of GD with small initialization. Furthermore, using a similar argument, Jin et al. (2023) demonstrate an incremental learning phenomenon of GD with small initialization; Eigenvectors associated with larger eigenvalues are learned first. GD with small initialization has also been studied in other works such as Ma et al. (2022); Soltanolkotabi et al. (2023) and related references. Nevertheless, the behavior of GD when initialized with large values remains less understood. Here we refer to $X_0 = \varpi N_0$ as large initialization if $\varpi$ is a positive constant independent of $d$ and $N_0$’s entries are independently distributed as $\mathcal{N}(0, \frac{1}{d})$. Correspondingly, small initialization refers to the case where $\varpi$ tends to zero as $d$ tends to infinity. Notably, when $d$ tends to infinity, the norm $\|X_0\|$ converges to a positive constant (or zero) for large (or small) initialization. Existing literature using small initialization typically assumes $\varpi$ be of order $d^{-\iota(\kappa)}$, where $\kappa > 1$ is the conditional number and $\iota(\cdot)$ is an increasing function with $\iota(\infty) = \infty$. Such small initialization, despite the solid theories, is seldom adopted in practice. For example, in deep learning, Lecun initialization (LeCun et al., 2002), Xavier initialization (Glorot & Bengio, 2010), Kaiming initialization (He et al., 2016), and many other initialization strategies all use large random initialization, i.e., $\varpi$ is a constant. Hence, despite the challenges, examining the properties of GD with large initialization is still of great importance. This paper explores the behaviors of GD with large initialization when addressing problem (1). By using novel signal-to-noise (SNR) and inductive arguments, we offer a comprehensive analysis of the GD trajectory starting from the initial point to the global minima. We show that GD with large initialization may still exhibit an incremental learning phenomenon (Jin et al., 2023; Gissin et al., 2019; Li et al., 2020). Our result also implies the fast global convergence of GD under certain transition assumptions. It is worth noting that the verification of the transition assumptions remains a problem. For convenience, we informally summarize our results below. **Theorem 1 (Informal)** Suppose $\Sigma$ is a positive semi-definite matrix with leading $r + 1$ eigenvalues strictly decreasing. Let $X_t$ be the GD sequence for problem (1) with $X_0 = \varpi N_0$, where $\varpi$ is a positive constant independent of $d$ and $N_0 \in \mathbb{R}^{d \times r}$ has independent $\mathcal{N}(0, \frac{1}{d})$ entries. 
Then - the GD sequence converges to the global minima almost surely (Lee et al., 2016; Zhu et al., 2021); - a comprehensive trajectory analysis of GD is given, indicating that eigenvectors associated with larger eigenvalues are learned first; - under an unverified transition assumption, GD achieves $\epsilon$-accuracy in $O(\log(\frac{1}{\epsilon}) + \log(d))$ steps. To illustrate our results more clearly, we provide a simple but representative experiment on rank-two matrix approximation. The parameters are set as follows: $d = 4000$, $r = 2$, and $\Sigma = \text{diag}(1, 0.5, e)$, where $e \in \mathbb{R}^{d-r}$ is an arithmetic sequence transitioning from 0.3 down to 0. Let $X_0 = 0.5N_0$ with the entries of $N_0$ independently drawn from $\mathcal{N}(0, \frac{1}{d})$. We compute the GD sequence $X_t$ with a step size of 0.1 and evaluate the errors $\|\Sigma_r - X_t X_t^\top\|_F$, where $\Sigma_r = \text{diag}(1, 0.5, 0, \ldots, 0)$ is the best rank-$r$ matrix approximation to $\Sigma$. In Figure 1, we plot the error curve, highlight several noteworthy points on the curve, and depict the heat maps of the first three rows and columns of $X_t X_t^\top$ at these steps. Observations reveal that GD exhibits an incremental learning phenomenon and the error curve has two types of shapes: flat and steep. To interpret the error curve displayed in Figure 1, we shall analyze the first $r$ rows of $X_t$ one by one. In particular, we will study the dynamics of the quantities $\sigma_1(u_{k,t})$ and $\sigma_1(u_{k,t}K_{k,t}^\top)$, where $u_{k,t}$ is the $k$-th row of $X_t$ and $K_{k,t}$ is the $(k+1)$-to-$d$-th rows of $X_t$. These quantities are associated with the diagonal and off-diagonal elements in the heat map of $X_t X_t^\top$. Hence, one can correspond our mathematical analysis with the dynamics of the heat maps displayed in Figure 1. Notably, our analysis on the SNR $\sigma_1^2(u_{k,t})/\sigma_1(u_{k,t}K_{k,t})$ demonstrates that the off-diagonal elements shall decrease in a geometric rate, once the signal strength $\sigma_1^2(u_{k,t})$ reaches a certain level. This motivates us to employ an inductive argument to analyze the whole convergence trajectory. Figure 1: Plot 1 shows the errors $\|\Sigma_r - X_t X_t^\top\|_F$ over iterations. Plots 2-5 show the heat maps of the top three rows and columns of $X_t X_t^\top$ at iterations $t = 0, 37, 80, 140,$ and $300$, corresponding to the red points in Plot 1. The rest of this paper proceeds as follows. Section 2 reviews the usage of SNR analysis for rank-one matrix approximation. Section 3 uses the SNR analysis to prove the local linear convergence of GD in general rank problems. In Section 4, we examine the random initialization. Specifically, Section 4.1 reviews small initialization and Section 4.2 considers large initialization and presents our main theorem. In Section 5, we provide a sketch of proof. Concluding discussions are given in Section 6 and proofs are provided in the Appendix. ## 2 SNR ANALYSIS FOR RANK-ONE MATRIX APPROXIMATION The rank-one matrix approximation is well-studied. Chen et al. (2019) demonstrated that GD with large random initialization exhibits linear convergence to the global minima, leveraging a SNR argument. Specifically, consider problem (1) with $r = 1$ and assume $\Sigma = \text{diag}(\lambda_1, \ldots, \lambda_d)$ is diagonal with decreasing diagonal elements and $\lambda_1 > \lambda_2$. 
Let the initial point $x_0 \in \mathbb{R}^d$ be a vector such that the first entry is non-zero and the norm $\|x_0\|$ is smaller than $2\lambda_1$. Then $x_t x_t^\top$ converges to $\text{diag}(1, 0, \ldots, 0)$ fast, where $x_t$ is given by the GD update rule $$x_t = x_{t-1} + \eta (\Sigma - x_{t-1} x_{t-1}^\top)x_{t-1},$$ and $\eta$ is the learning rate. In their analysis, Chen et al. (2019) first decompose $x_t$ as $x_t = (a_t, b_t)^\top$ with $a_t \in \mathbb{R}$ and $b_t \in \mathbb{R}^{d-1}$. Then the GD rule can be rewritten as $$a_t = a_{t-1} + \eta \lambda_1 a_{t-1} - \eta (a_{t-1}^2 + \|b_{t-1}\|^2)a_{t-1},$$ $$b_t = b_{t-1} + \eta \Sigma_{\text{res}} b_{t-1} - \eta (a_{t-1}^2 + \|b_{t-1}\|^2)b_{t-1},$$ where $\Sigma_{\text{res}} = \text{diag}(\lambda_2, \ldots, \lambda_d)$. Let $\alpha_t = |a_t|$ and $\beta_t = \|b_t\|$ and assume $\eta \lambda_1$ is smaller than some constant, say $\frac{1}{12}$. Then it is direct to derive that $$\alpha_t = (1 + \eta \lambda_1 - \eta \alpha_{t-1}^2 - \eta \beta_{t-1}^2)\alpha_{t-1},$$ $$\beta_t \leq (1 + \eta \lambda_2 - \eta \alpha_{t-1}^2 - \eta \beta_{t-1}^2)\beta_{t-1}.$$ Dividing (6) by (5), we can show that $$\frac{\beta_t}{\alpha_t} \leq \frac{1 + \eta \lambda_2 - \eta \alpha_{t-1}^2 - \eta \beta_{t-1}^2}{1 + \eta \lambda_1 - \eta \alpha_{t-1}^2 - \eta \beta_{t-1}^2} \cdot \frac{\beta_{t-1}}{\alpha_{t-1}} \leq \left(1 - \frac{\eta \Delta}{3}\right) \cdot \frac{\beta_{t-1}}{\alpha_{t-1}},$$ There is no loss of generality to assume that $\Sigma$ is diagonal because GD analysis is invariant to rotations. where $\Delta = \lambda_1 - \lambda_2$ is the eigengap and the second inequality uses that $$h(s) = \frac{1 - \eta \Delta / 2 + s}{1 + \eta \Delta / 2 + s} \leq h\left(\frac{1}{2}\right) \leq 1 - \frac{\eta \Delta}{3}, \quad \forall s \in \left[-\frac{1}{2}, \frac{1}{2}\right].$$ (8) Inequality (7) states that the ratio $\frac{\beta_t}{\alpha_t}$ will decay to zero geometrically fast. Using this, Chen et al. (2019) establish that $\beta_t$ and $\alpha_t$ converge fast to zero and $\lambda_1$ respectively. Our paper refers to this argument as a SNR analysis, and we refer to $\alpha_t$ as the signal strength and $\beta_t$ as the noise strength. 3 BENIGN INITIALIZATION Generalizing the SNR argument to general rank problems poses additional challenges. For instance, the global minima cannot be characterized by the two real numbers $\alpha_t$ and $\beta_t$. Even if we find other effective quantities representing the GD sequence, giving desired dynamic analysis as in (5) and (6) remains challenging. In essence, this issue originates from the heterogeneity in different dimensions or mathematically the non-commutativity of matrix multiplication. One way to tackle the issue is to use a benign initialization with a high initial SNR. This allows us to extend the SNR analysis to general rank problems and establish the local linear convergence of GD. Consider problem (1) with general $r$ and assume $\Sigma = \text{diag}(\lambda_1, \ldots, \lambda_d)$ is diagonal with decreasing diagonal elements and $\Delta := \lambda_r - \lambda_{r+1} > 0$. Let $X_0 \in \mathbb{R}^{d \times r}$ be an initial point and $$X_t = X_{t-1} + \eta (\Sigma - X_{t-1}X_{t-1}^\top)X_{t-1},$$ (9) where $\eta$ is the learning rate. For the SNR argument, we decompose $X_t$ as $(U_t^\top, J_t^\top)^\top$, where $U_t$ is the first $r$ rows of $X_t$ and $J_t$ is the rest $d-r$ rows of $X_t$. 
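The quantities entering this SNR argument can be computed directly from an iterate; the following is a minimal sketch (illustrative only) that performs the split into $U$ and $J$ and returns $\sigma_r(U)$, $\sigma_1(J)$, and their squared ratio, the roles of which are explained next.

```python
import numpy as np

def signal_noise_snr(X, r):
    """Split X into U (first r rows) and J (remaining d - r rows); return
    (sigma_r^2(U) / sigma_1^2(J), sigma_r(U), sigma_1(J))."""
    U, J = X[:r, :], X[r:, :]
    sig = np.linalg.svd(U, compute_uv=False)[-1]   # sigma_r(U): smallest singular value of U
    noi = np.linalg.svd(J, compute_uv=False)[0]    # sigma_1(J): largest singular value of J
    return sig**2 / noi**2, sig, noi

# A random matrix standing in for an iterate of (9); in practice X would come from the GD run.
rng = np.random.default_rng(0)
X = 0.5 * rng.normal(scale=1.0 / np.sqrt(400), size=(400, 2))
print(signal_noise_snr(X, r=2))
```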
In analogy to the rank-one case, we may think of $U_t$ as the signal and $J_t$ as the noise, because at the global minima $U$ is non-zero while $J$ is zero. By adopting a benign initialization, we mean $\sigma_r(U_0)$ is large while $\sigma_1(J_0)$ is small. More precisely, we define the following set $$\mathcal{R} = \{X = (U, J) | \sigma_1^2(X) \leq 2\lambda_1, \sigma_r^2(U) \geq \Delta/4, \sigma_1^2(J) \leq \lambda_r - \Delta/2\}. $$ (10) The set $\mathcal{R}$ contains all the global minima of problem (1). Moreover, the SNR $\sigma_r^2(U)/\sigma_1^2(J)$ is larger than the constant $\Delta/(4\lambda_1)$ for any $X$ in $\mathcal{R}$. If we initialize GD within $\mathcal{R}$, then the sequence $X_t$ will remain in $\mathcal{R}$ and the SNR will grow fast to infinity. Consequently, we can establish the local linear convergence of GD as in Theorem 2. Theorem 2 is useful for examining random initialization. Specifically, when $X_0 \not\in \mathcal{R}$, the convergence of GD consists of two stages, the first stage when the sequence enters $\mathcal{R}$ and the final convergence stage. Only the first stage needs to be further analyzed. **Theorem 2** Suppose $\eta \leq \frac{\Delta^2}{36\lambda_1^2}$, $X_0 \in \mathcal{R}$, and $X_t$ is given by (9). Then, for small $\epsilon > 0$, we have $$\|\Sigma_r - X_tX_t^\top\| \leq \epsilon \text{ in } O\left(\frac{6}{\eta \Delta} \ln \frac{200r\lambda_3^3}{\eta \Delta^2 \epsilon^2}\right) \text{ iterations, where } \Sigma_r = \text{diag}(\lambda_1, \ldots, \lambda_r, 0, \ldots, 0).$$ **Remark 3** While our paper aims to understand large initialization in later sections, Theorem 2 is still an additional contribution of the paper. Prior works on local linear convergence either study the rank-one case (Chen et al., 2019) or require $\Sigma$ to be exact of rank $r$ (Zhu et al., 2021). Their arguments cannot be directly used to prove Theorem 2. In contrast, by employing an SNR argument, we can establish the local linear convergence for general cases. Our SNR analysis relies on a lower bound for the signal $\sigma_r^2(U_{t+1})$ and an upper bound for the noise $\sigma_1^2(J_{t+1})$. These two bounds need to be related so that the ratio of SNR$_{t+1}$ by SNR$_t$ can be analyzed. This is the challenging part of the SNR analysis. Finally, we note that although we assume $\Sigma$ is positive semi-definite for simplicity, our proof can be easily extended to general symmetric $\Sigma$. Also, it can be modified to establish the local linear convergence of GD for matrix sensing (Zhu et al., 2021). 4 RANDOM INITIALIZATION Benign initialization has limited practical utility as it requires oracle information. This is particularly true in matrix sensing scenarios when $\Sigma$ is only observed through random measurements (Stöger & Soltanolkotabi, 2021). Hence, researchers have begun to investigate random initialization. Note that by Theorem 2, the convergence analysis of GD reduces to studying how long it takes for the sequence to enter $\mathcal{R}$. Once the sequence enters $\mathcal{R}$, it will converge to the global minimum exponentially fast. 4.1 Small Random Initialization Existing works (except for the rank-one case) all consider the scenario of small random initialization. They assume \( X_0 = \varpi N_0 \), where \( N_0 \in \mathbb{R}^{d \times r} \) has independent \( \mathcal{N}(0, \frac{1}{d}) \) entries and \( \varpi \) is very small. By the concentration results, the norm \( \|X_0\| \) is of order \( O(\varpi) \). 
When \( \varpi \) is sufficiently small, the higher-order term \( X_t X_t^\top X_t \) in (9) becomes negligible in the early stage. Consequently, in the early stage, the GD iteration behaves like a spectral method (or a power method): \[ X_t \approx X_{t-1} + \eta \Sigma X_{t-1}. \] (11) The eigenvectors associated with larger eigenvalues will be learned faster. Using the same \( U, J \) in Section 3, we know \( \sigma_r(U_{t+1})/\sigma_r(U_t) \) is greater than \( \sigma_1(J_{t+1})/\sigma_1(J_t) \) for small \( t \), meaning that the signal strength increases faster than the noise strength. As long as we pick a sufficiently small \( \varpi \), we can show that after \( O(\log(d)) \) rounds, \( \sigma_r^2(U_t) \) will rise above \( \Delta/4 \) while \( \sigma_1(J_t) \) remains negligible. This implies that the sequence \( X_t \) will enter the region \( R \) quickly, and combined with a local linear convergence result, Stöger & Soltanolkotabi (2021) demonstrate the linear global convergence of GD. In addition, Jin et al. (2023) reveal the incremental learning behavior of GD with a small \( \varpi \). These work typically require \( \varpi = d^{-\ell(\kappa)} \) for some positive, increasing function \( \ell(\cdot) \), where \( \kappa = \lambda_1/\Delta \geq 1 \) is the conditional number. For instance, Stöger & Soltanolkotabi (2021) require \[ \varpi \lesssim \min\{d^{-1/2}, d^{-3\kappa^2}\}. \] (12) Jin et al. (2023) require an even smaller \( \varpi \). Such \( \varpi \) decays to zero fast when \( d \) increase or \( \kappa \) increases. 4.2 Large Random Initialization In sharp contrast, practitioners often use large initialization with \( X_0 = \varpi N_0 \), where \( \varpi \) is a constant independent of \( d \). For this case, the arguments in Section 3 or Section 4.1 are insufficient for building effective theories. Specifically, the initial SNR is too low to use the arguments in Section 3. Also, the initial magnitude \( \|X_0\| \) is high, rendering the arguments in Section 4.1 unfeasible. To understand large initialization, we will give a delicate dynamic analysis, corresponding to Figure 1 and related discussions in the introduction. To proceed, we first introduce some notations. Consider problem (1) with rank \( r \) and assume without loss of generality that \( \Sigma = \text{diag}(\lambda_1, \ldots, \lambda_d) \) is diagonal with decreasing diagonal elements. We assume the leading \( r+1 \) eigenvalues of \( \Sigma \) are strictly decreasing, meaning that the eigengap \( \Delta = \min_{i < r} \{\lambda_i - \lambda_{i+1}\} \) is positive. Let \( X_t \) be the GD sequence from (9) and \( X_0 \) be the initial point. We define \( u_{k,t} \) as the \( k \)-th row of \( X_t \) and \( K_{k,t} \) as the \((k+1)\)-to-\(d\)-th rows of \( X_t \). Their relationships to Figure 1 have been discussed in the introduction. Finally, to present our main theorem, we define the following quantities related to the GD trajectory. • First, we define \( t_{\text{init},1} = \min\{t \geq 0 \mid X_t \in S\} \) as the first time when \( X_t \) enters \( S \), where \[ S = \{X \in \mathbb{R}^{d \times r} \mid \sigma_1^2(X) \leq 2\lambda_1, \sigma_2^2(K_k) \leq \lambda_k - \frac{3\Delta}{4}, \forall k \leq r\}, \] (13) and \( K_k \) stands for the \((k+1)\)-to-\(d\)-rows of \( X \). Here \( S \) represents a set where the norms of \( X \) and \( K_k \) are suitably upper bounded. 
• Next, we define two constants \( t^* \) and \( t^\sharp \) as follows: \[ t^* = \log \left( \frac{\Delta^2}{8\lambda_1^2 + 144r^2\lambda_1} \right) / \log(1 - \eta \Delta/6), \quad t^\sharp = \log \left( \frac{\Delta}{4r} \right) / \log(1 - \eta \Delta/6). \] (14) • Finally, we define the following quantities successively until \( T_{u,r} \). o \( T_{u,k} = \min\{t \geq 0 \mid \sigma_1^2(u_{k,t+t_{\text{init},k}}) \geq \Delta/2\} \). It characterizes the time when the \( k \)-th signal strength surpasses \( \Delta/2 \) since \( t_{\text{init},k} \). o \( t_k = t_{\text{init},k} + T_{u,k} + t^* \). o \( t_k^\sharp \) is defined as the smallest integer such that \[ r(1 - \eta \Delta/6)^{t_k^\sharp} \leq \sqrt{\frac{\Delta}{8}} \min\{\sigma_1(u_{k+1,t_k+t_k^\sharp}), \sqrt{\frac{\Delta}{2}}\}. \] (15) \( t_k^\sharp \) characterizes the time when the \((k+1)\)-th signal strength is no longer smaller than a geometrically decaying sequence. These quantities represent the durations of various stages of the GD convergence. In Theorem 6, we provide upper bounds for these quantities and characterize the behavior of GD in specific time. Our result is deterministic and applicable to the case of large random initialization. **Assumption 4** Assume \( t^*_k < \infty \) for all \( k \leq r \). **Assumption 5 (Transition Assumption)** Assume \( t^*_k = O(\log(d)) \) for all \( k \leq r \). **Theorem 6** Suppose \( \eta \leq \frac{\Delta}{100\lambda_1} \), \( \sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}} \), \( X_t \) is the GD sequence, and Assumption 4 holds. Then we have 1. \( t_{\text{init},1} = O\left( \frac{1}{\eta \lambda_1} \log \frac{1}{\delta \eta \lambda_1} \right) + O\left( \frac{1}{\eta \Delta} \log \frac{8 \lambda_1}{\Delta} \right) \), which is a small constant. Moreover, \( X_t \in S \) for all \( t \geq t_{\text{init},1} \). This property holds even without Assumption 4. 2. For all \( k \leq r \), \( t_k \) and \( t_{\text{init},k} \) are finite. In addition, \( T_{u_k} = O\left( \frac{4}{\eta \Delta} \log \frac{\Delta}{2 \sigma_1^2(u_k,t_{\text{init},k})} \right) \). 3. For all \( k \leq r \) and \( t \geq t_{\text{init},k} + T_{u_k} \), we have \( \sigma_1^2(u_k,t) \geq \frac{\Delta}{2} \). 4. For all \( k < r \) and \( t \geq t_k \), we have \( \sigma_1(u_k,t K_{k,t}^\top) \leq (1 - \eta \Delta/6)^{t-t_k} \) and \[ |p_{k,t}| \leq (2 \lambda_1 + \frac{24r}{\eta \Delta}) \cdot (1 - \eta \Delta/8)^{t-t_k}, \] where \( p_{k,t} = \lambda_k - \sigma_1^2(u_k,t) \). This demonstrates the incremental learning of GD. 5. For all \( t \geq t_R := t_{\text{init},r} + T_{u_r} + t^* + t^d \), we have \( X_t \in R \). 6. GD achieves \( \epsilon \)-accuracy, i.e., \( \| \Sigma_r - X_t X_t^\top \|_F \leq \epsilon \), after \( t_R + O\left( \frac{6}{\eta \Delta} \ln \frac{200r \lambda_1^3}{\eta \Delta^2 \epsilon} \right) \) iterations. 7. If Assumption 5 holds, then GD achieves \( \epsilon \)-accuracy in \( O(\log(d) + \log(1/\epsilon)) \) iterations. Let us discuss about the assumptions and conclusions. First, we assume \( \sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}} \). It holds with high probability when we use \( X_0 = \varpi N_0 \) with \( \varpi \sim \frac{1}{\sqrt{\eta}} \) and the same \( N_0 \) as before. This order \( \frac{1}{\sqrt{\eta}} \) is optimal from the above, because the GD sequence may simply diverge when \( \sigma_1(X_0) \) is too large. For instance, consider \( \Sigma = 0 \) and \( \eta \sigma_1^2(X_0) \geq 3 \). 
By an inductive argument and the GD iteration (9), we can show that \[ \sigma_1(X_{t+1}) \geq (\eta \sigma_1^2(X_t) - 1) \cdot \sigma_1(X_t) > 2 \sigma_1(X_t), \quad \forall t. \] This implies that GD diverges in this scenario and justifies that our condition for \( \varpi \) is rate optimal. The only possible improvement is a constant factor. In addition, we compare our requirements with (12) in the scenario of small initialization. Specifically, condition (12) decays to zero exponentially fast when \( d \) increases, while our condition is independent of \( d \). Next, we emphasize that Assumption 4 almost surely holds if we use random initialization. This follows from the theory of Lee et al. (2016) and the landscape analysis of Zhu et al. (2021). Zhu et al. (2021) show that problem (1) only has strict saddle points and all local minima are global ones. Lee et al. (2016) prove that GD almost surely avoids strict saddle points. Combining these two results, we know GD converges to the global minimum for problem (1). This implies Assumption 4 because if it does not hold for some \( k \), then \( \sigma_1(u_k,t) \) will converge to zero and the GD sequence will converge to a saddle point. This case almost never happens. This proves the following proposition. **Proposition 7** Suppose \( \eta \leq \frac{\Delta}{100\lambda_1} \). Then the following set \[ \text{failure set} := \{ X \in \mathbb{R}^{d \times r} \mid \sigma_1(X) \leq \frac{1}{\sqrt{3\eta}}, X_t \text{ is the GD sequence initialized with } X, \text{ and Assumption 4 does not hold for this sequence } X_t. \} \] has measure zero. Third, Assumption 5 is a more advanced assumption because it upper bounds the quantity $t_k^*$. We call it a transition assumption because it allows us to transit the analysis from the $k$-th row to the $(k + 1)$-th row and it assumes the transition time is $\mathcal{O}(\log(d))$. With this assumption, we could obtain the seventh property in Theorem 6, that is, the fast global convergence of GD. Nevertheless, it is challenging to verify this assumption. In Section 5.1.3, we will give more discussions. Fortunately, without Assumption 5, the first six properties in Theorem 6 still hold. These properties provide meaningful characterizations of the convergence of GD. Specifically, all quantities beyond $t_k^*$ are suitable upper bounded either by a constant or a logarithmic term. These bounds explain the fast convergence of GD in Figure 1 (to a certain degree). Moreover, the fourth property is noteworthy. It demonstrates that the $k$-th signal strength will converge linearly to the target value since the $t_k$-th step. This is independent of $t_j^*$ for all $j \geq k$. In other words, the $k$-th signal will converge fast to the target value independent of the behavior of the latter ($(k + 1)$-to-$r$-th) signals. This explains the incremental learning phenomenon exhibited by GD. 5 PROOF SKETCH In this section, we will provide a sketch of proof. We will start with rank-two matrix approximation and then extend it to general rank problems. The only difference between rank-two problem and general rank problems lies in how many rounds of inductive arguments are needed. 5.1 RANK-TWO MATRIX APPROXIMATION To start with, we first show that when $\sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}}$, the GD sequence will quickly enter the region $S$ defined in (13), and the sequence will remain in $S$ afterwards. This proves the first property in Theorem 6. 
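As a numerical sanity check (not a proof device), the entrance time into $S$ can be measured directly along a trajectory. The sketch below implements the membership test following (13) as printed and reports the first step at which the iterate lies in $S$, i.e., an empirical $t_{\text{init},1}$; all parameter values are illustrative.

```python
import numpy as np

def in_S(X, lams, Delta):
    """Membership test for S in (13) as printed: sigma_1^2(X) <= 2*lambda_1 and, for each
    k <= r, sigma_2^2(K_k) <= lambda_k - 3*Delta/4, where K_k is rows k+1..d of X."""
    r = X.shape[1]
    if np.linalg.norm(X, 2) ** 2 > 2 * lams[0]:
        return False
    for k in range(1, r + 1):
        s = np.linalg.svd(X[k:, :], compute_uv=False)
        if s[1] ** 2 > lams[k - 1] - 0.75 * Delta:
            return False
    return True

def empirical_t_init1(Sigma, lams, Delta, r=2, eta=0.05, varpi=0.5, max_steps=1000, seed=0):
    """Run GD (9) from a large random start and report the first step with X_t in S."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    X = varpi * rng.normal(scale=1.0 / np.sqrt(d), size=(d, r))
    for t in range(max_steps):
        if in_S(X, lams, Delta):
            return t
        X = X + eta * (Sigma - X @ X.T) @ X
    return None

d, r = 200, 2
lams = np.concatenate([[1.0, 0.5], np.linspace(0.3, 0.0, d - r)])
print(empirical_t_init1(np.diag(lams), lams, Delta=0.2, r=r))  # often 0: entrance into S is fast
```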
Recall that $t_{\text{init},1} = \min\{t \geq 0 \mid X_t \in S\}$ and $X_t$ is the GD sequence given by (9).

**Lemma 8** Suppose $\eta \leq \frac{1}{12\lambda_1}$ and $\sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}}$. Then $X_t \in S$ for all $t \geq t_{\text{init},1}$, where $$t_{\text{init},1} = \mathcal{O}\left(\frac{1}{\eta \lambda_1} \log \frac{1}{6\eta \lambda_1}\right) + \mathcal{O}\left(\frac{1}{\eta \Delta} \log \frac{8\lambda_1}{\Delta}\right).$$

Lemma 8 demonstrates that $S$ is an absorbing set of GD, meaning that the sequence will remain in the set after its first entrance. This enables us to use the property $X_t \in S$ in subsequent analysis.

5.1.1 $\sigma_1^2(u_{1,t})$ INCREASES ABOVE $\Delta/2$

Our next step is to analyze the first row $u_{1,t}$ of $X_t$. This is in sharp contrast to the results in Sections 3 and 4.1, where the first $r$ rows of $X_t$ are analyzed together. Although using large initialization makes previous analysis infeasible, it is still manageable to examine only the first row of $X_t$. In Lemma 9, we show that $\sigma_1^2(u_{1,t})$ increases fast above $\Delta/2$, and it remains larger than that afterwards. This proves the second and third properties in Theorem 6 for $k = 1$. In addition, this aligns with the first stage of the GD dynamics as displayed in Figure 1.

**Lemma 9** Suppose $\eta \leq \frac{1}{12\lambda_1}$, $\sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}}$, and $\sigma_1(u_{1,t_{\text{init},1}}) > 0$. Then $\sigma_1^2(u_{1,t}) \geq \frac{\Delta}{2}$ for all $t \geq t_{\text{init},1} + T_{u_1}$, where $$T_{u_1} = \mathcal{O}\left(\frac{4}{\eta \Delta} \log \frac{\Delta}{2\sigma_1^2(u_{1,t_{\text{init},1}})}\right).$$

5.1.2 SNR CONVERGES LINEARLY TO INFINITY AND $\sigma_1^2(u_{1,t})$ CONVERGES

Once $\sigma_1^2(u_{1,t})$ exceeds $\frac{\Delta}{2}$, then by a technique similar to (7), we can show that the SNR $\frac{\sigma_1^2(u_{1,t})}{\sigma_1(u_{1,t}K_{1,t}^\top)}$ converges linearly to infinity, where $K_{1,t}$ is the 2-to-$d$-th rows of $X_t$. Since $\sigma_1^2(u_{1,t})$ belongs to the interval $[\Delta/2, 2\lambda_1]$ by Lemmas 8 and 9, we can show that the noise strength $\sigma_1(u_{1,t}K_{1,t}^\top)$ diminishes to zero fast. In particular, when $u_{1,t}K_{1,t}^\top = 0$, the dynamics of $u_{1,t}$ becomes $$u_{1,t+1} = u_{1,t} + \eta \lambda_1 u_{1,t} - \eta \sigma_1^2(u_{1,t}) u_{1,t}. $$ This update rule implies the fast convergence of $\sigma_1^2(u_{1,t})$ to $\lambda_1$. Generally, when the term $u_{1,t}K_{1,t}^\top$ is close to zero, the dynamics of $u_{1,t}$ will mimic the above iteration. Following this, we can establish the fast convergence of $\sigma_1^2(u_{1,t})$ to $\lambda_1$. These results are established in Lemma 10. This relates to property 4 in Theorem 6, and elucidates the second stage of the GD dynamics as depicted in Figure 1.

**Lemma 10** Suppose $\eta \leq \frac{\Delta}{100\lambda_1^2}$, $\sigma_1(X_0) \leq \frac{1}{\sqrt{3\eta}}$, and $\sigma_1(u_{1,0}) > 0$. Then for all $t \geq t_1$, we have $$\sigma_1(u_{1,t}K_{1,t}^\top) \leq (1 - \eta \Delta/6)^{t-t_1},$$ where $t_1 = t_{\text{init},1} + T_{u_1} + t^*$, $T_{u_1}$ is given in Lemma 9, and $t^*$ is a constant defined in (14). In addition, let $p_{1,t} = \lambda_1 - \sigma_1^2(u_{1,t})$ be the error term. Then for all $t \geq t_1$, we have $$|p_{1,t}| \leq \left(2\lambda_1 + \frac{24r}{\eta \Delta}\right) \cdot (1 - \eta \Delta/8)^{t-t_1}.$$

### 5.1.3 Transition Assumption and Induction

Lemma 10 shows that the magnitude $\sigma_1(u_{1,t}K_{1,t}^\top)$ diminishes linearly to zero.
This motivates us to decouple the original matrix factorization problem into two sub-problems. For the first sub-problem, we study the convergence of the first row of $X_t$, which has been presented in previous section. In the second sub-problem, we examine $K_{1,t}$, the 2-to-$d$-th rows of $X_t$. Such decoupling is exact when $u_{1,t}K_{1,t}^\top = 0$; under this condition, the update rule of $K_{1,t}$ becomes $$K_{1,t} = K_{1,t-1} + \eta(\Gamma_1 - K_{1,t-1}K_{1,t-1}^\top)K_{1,t-1},$$ where $\Gamma_1 = \text{diag}(\lambda_2, \ldots, \lambda_d)$. This is congruent with the GD update rule of $X_t$ as in (9), and hence an inductive argument could be applied. Generally, when the noise term $\sigma_1(u_{1,t}K_{1,t}^\top)$ only decreases fast but does not reach zero, one should check whether $u_{1,t}K_{1,t}^\top$ is negligible (in the analysis of $u_{2,t}$). Specifically, if $\sigma_1(u_{2,t})$ is not always decreasing at the same speed as $\sigma_1(u_{1,t}K_{1,t}^\top)$, then we can apply the above inductive argument. To formulate this intuition, we introduce a variable $t_1^*$. It is defined as the smallest integer such that $$r(1 - \eta \Delta/6)^{t_1^*} \leq \sqrt{\frac{\Delta}{8}} \min\{\sigma_1(u_{2,t_1^*}), \sqrt{\frac{\Delta}{2}}\},$$ where $t_1$ is defined in Lemma 10. Recall that for all $t \geq t_1$, $\sigma_1(u_{1,t}K_{1,t}^\top) \leq (1 - \eta \Delta/6)^{t-t_1}$. Hence, (17) essentially compares the second signal strength $\sigma_1(u_{2,t})$ with an upper bound on the noise term $\sigma_1(u_{1,t}K_{1,t}^\top)$. It turns out that the noise term is negligible when (17) holds. In particular, a similar result as Lemma 9 can be established for the second signal $\sigma_1(u_{2,t})$, leading to Lemma 11. It is also related to the second and third property in Theorem 6 (for $k = 2$). **Lemma 11** Suppose conditions of Lemma 10 holds. Let $t_{\text{init},2} = t_1 + t_1^*$, where $t_1$ is given by Lemma 10 and $t_1^*$ is given by (17). Suppose $t_1^* < \infty$. Then $\sigma_1^2(u_{2,t}) \geq \frac{\Delta}{2}$ for all $t \geq t_{\text{init},2} + T_{u_2}$, where $$T_{u_2} = O\left(\frac{4}{\eta \Delta} \log \frac{\Delta}{2\sigma_1^2(u_{2,t_{\text{init},2}})}\right).$$ In Lemma 11, we assume $t_1^* < \infty$, which relates to Assumption 4. If we assume $t_1^* = O(\log(d))$ as in Assumption 5, then we can show that $T_{u_2} = O(\log(d))$ as well. While we have not theoretically characterized the quantity $t_1^*$, our theories are still insightful in the following sense. - First, the term $\sigma_1(u_{1,t}K_{1,t}^\top)$ is shown to decay to zero linearly fast while $\sigma_1^2(u_{2,t})$ does not seem to possess similar theories. Hence, we may expect that the time point $t_1^*$ is not large. - Second, $t_1^*$ characterizes the time when the GD sequence escapes from the saddle points. This time is inevitable for the GD sequence converging to the global minima. Even we do not provide an upper bound on $t_1^*$, we know the convergence behavior of GD during this time. Notably, during this time, both $\sigma_1(u_{1,t}K_{1,t}^\top)$ and $\sigma_1^2(u_{2,t})$ converge to zero fast. Any stationary point with $u_2 = 0$ is a saddle point. Hence, if the GD sequence $X_t$ converges with $t_1^* = \infty$, then it must converge to a saddle point. • Thirdly, during the time $t_1 - (t_1 + t^*_1)$, while $\sigma_1^2(u_{2,t})$ converges to zero fast, the first signal $\sigma_1^2(u_{1,t})$ still converges to $\lambda_1$, as shown in Lemma 10. 
This means the convergence of the first signal is not affected by the behaviors of the rest signals, which supports the incremental learning phenomenon – leading signals first converge even when the rest are stuck by saddle points. • Finally, the time $t_1$ to $t_1 + t^*_1$ aligns with the third stage of the GD dynamics as displayed in Figure 1. The experiment shows that the time $t^*_1$ is not too long. Despite these arguments, there is still a need to examine the duration $t^*_1$ in the future research, which might involve investigating specific initialization mechanisms. 5.1.4 Final Convergence Since $r = 2$, the analysis of previous three stages implies that both the first signal strengths are larger than $\Delta/2$, and the related noise components are geometrically decaying. A simple verification shows that the GD sequence $X_t$ will quickly enter the region $R$, which is defined in (10). Then by the local linear convergence of GD in Theorem 2, we shall complete the characterization of the GD sequence’s convergence to the global minima. This final stage aligns with the fourth stage of the GD dynamics as illustrated in Figure 1. 5.2 General Rank Matrix Approximation It is direct to extend rank-two matrix approximation to general rank case. The key point is to repeat the inductive arguments for $(r-1)$ rather than one times. Similar to the rank-two case, we will now successively show that $\sigma_1^2(u_{k,t})$ surpasses $\Delta/2$ and $\sigma_1(u_{k,t}K_{k,t}^\top)$ diminishes linearly to zero for all $k \leq r$. Moreover, we will show that $\sigma_1^2(u_{k,t})$ converges to $\lambda_k$ after certain iterations. Once the first $r$ rows of $X_t$ are all analyzed, we can show that the sequence $X_t$ quickly enters the region $R$ defined in (10). By invoking the local linear convergence theorem, we will conclude the proof. Our analysis consistently uses the SNR argument, where the choices of SNRs vary across different contexts. Specifically, • When we analyze the $k$-th signal strength in Theorem 6, we will analyze the SNR $\frac{\sigma_1^2(u_{k,t})}{\sigma_1(u_{k,t}K_{k,t}^\top)}$. This will prove both the diminishing of $\sigma_1(u_{k,t}K_{k,t}^\top)$ and the convergence of $\sigma_1^2(u_{k,t})$ to $\lambda_k$. • When we analyze the local linear convergence in Theorem 2, we will take the SNR as $\frac{\sigma_r^2(U_t)}{\sigma_1^2(J_t)}$, where $U, J$ are defined in Section 3. Such analysis will prove the linear convergence of $J$ to zero. 6 Concluding Remarks This paper presents a comprehensive analysis of the trajectory of GD in addressing matrix factorization issues, emphasizing particularly on instances with large initialization. The analysis employs both a SNR argument and an induction argument to bolster the investigation’s depth and insight. Our finding is that even with large initialization, GD may still exhibit an incremental learning phenomenon. Also, the main challenging convergence issue is to escape from the saddle points. We hope our findings can inspire other researchers in related fields. There are several limitations within this paper, bringing future research opportunities. • First, we do not upper bound the time $t^*_k$ defined in (14). Hence, it is of interest to give an effective upper bound. Also, one may examine this point to see if negative results can be established. • Second, our paper requires a strictly decreasing top eigenvalues. Extending to more general matrices may need additional studies. 
• Third, our analysis focuses on the simplest matrix factorization setting. It is intriguing to study similar results in other settings, such as matrix sensing, where $\Sigma$ is only accessible via linear measurements. Our delicate dynamic analysis is sensitive to the noise introduced by the measurement mechanism. Hence, new theoretical tools are needed. • Fourth, it is interesting to examine GD in solving deep matrix factorization. It is unknown how large initialization affects the GD trajectory in that case. REFERENCES Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. *Advances in Neural Information Processing Systems*, 32, 2019. Yuxin Chen, Yuejie Chi, Jianqing Fan, and Cong Ma. Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval. *Mathematical Programming*, 176:5–37, 2019. Yuejie Chi, Yue M Lu, and Yuxin Chen. Nonconvex optimization meets low-rank matrix factorization: An overview. *IEEE Transactions on Signal Processing*, 67(20):5239–5269, 2019. Daniel Gissin, Shai Shalev-Shwartz, and Amit Daniely. The implicit bias of depth: How incremental learning drives generalization. *arXiv preprint arXiv:1909.12051*, 2019. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. *Advances in neural information processing systems*, 30, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Daniel Jakubovitz, Raja Giryes, and Miguel RD Rodrigues. Generalization error in deep learning. In *Compressed Sensing and Its Applications: Third International MATHEON Conference 2017*, pp. 153–193. Springer, 2019. Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. *arXiv preprint arXiv:1810.02032*, 2018. Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon Shaolei Du, and Jason D Lee. Understanding incremental learning of gradient descent: A fine-grained analysis of matrix sensing. In *International Conference on Machine Learning*, pp. 15200–15238. PMLR, 2023. Yann LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In *Neural networks: Tricks of the trade*, pp. 9–50. Springer, 2002. Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In *Conference on learning theory*, pp. 1246–1257. PMLR, 2016. Jason D Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I Jordan, and Benjamin Recht. First-order methods almost always avoid strict saddle points. *Mathematical programming*, 176:311–337, 2019. Zhiyuan Li, Yuping Luo, and Kaifeng Lyu. Towards resolving the implicit bias of gradient descent for matrix factorization: Greedy low-rank learning. *arXiv preprint arXiv:2012.09839*, 2020. Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval and matrix completion. In *International Conference on Machine Learning*, pp. 3345–3354. PMLR, 2018. 
Jianhao Ma, Lingjun Guo, and Salar Fattahi. Behind the scenes of gradient descent: A trajectory analysis via basis function decomposition. *arXiv preprint arXiv:2210.00346*, 2022. Mahdi Soltanolkotabi, Dominik Stöger, and Changzhi Xie. Implicit balancing and regularization: Generalization and convergence guarantees for overparameterized asymmetric matrix sensing. *arXiv preprint arXiv:2303.14244*, 2023. Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. *The Journal of Machine Learning Research*, 19(1):2822–2878, 2018.
3VD4PNEt5q
Given the pivotal role of fusion models in autonomous vehicle systems, the proposed adversarial approach inevitably raises safety and security concerns. How might these attacks compromise the integrity and reliability of autonomous driving systems? A comprehensive discussion on this would be crucial for stakeholders in the autonomous vehicle domain.
FUSION IS NOT ENOUGH: SINGLE MODAL ATTACKS ON FUSION MODELS FOR 3D OBJECT DETECTION Zhiyuan Cheng¹ Hongjun Choi² Shiwei Feng¹ James Liang³ Guanhong Tao¹ Dongfang Liu³ Michael Zuzak³ Xiangyu Zhang¹ ¹Purdue University {cheng443, feng292, taog, xyzhang}@purdue.edu ²DGIST hongjun@dgist.ac.kr ³Rochester Institute of Technology {jcl3689, dongfang.liu, mjzeec}@rit.edu ABSTRACT Multi-sensor fusion (MSF) is widely used in autonomous vehicles (AVs) for perception, particularly for 3D object detection with camera and LiDAR sensors. The purpose of fusion is to capitalize on the advantages of each modality while minimizing its weaknesses. Advanced deep neural network (DNN)-based fusion techniques have demonstrated the exceptional and industry-leading performance. Due to the redundant information in multiple modalities, MSF is also recognized as a general defence strategy against adversarial attacks. In this paper, we attack fusion models from the camera modality that is considered to be of lesser importance in fusion but is more affordable for attackers. We argue that the weakest link of fusion models depends on their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks. Our approach employs a two-stage optimization-based strategy that first thoroughly evaluates vulnerable image areas under adversarial attacks, and then applies dedicated attack strategies for different fusion models to generate deployable patches. The evaluations with six advanced camera-LiDAR fusion models and one camera-only model indicate that our attacks successfully compromise all of them. Our approach can either decrease the mean average precision (mAP) of detection performance from 0.824 to 0.353, or degrade the detection score of a target object from 0.728 to 0.156, demonstrating the efficacy of our proposed attack framework. Code is available. 1 INTRODUCTION 3D object detection is a critical task in the perception of autonomous vehicles (AVs). In this task, AVs employ camera and/or LiDAR sensors input to predict the location, size, and categories of surrounding objects. Camera-LiDAR fusion models, which combine the high-resolution 2D texture information from camera images with the rich 3D distance information from LiDAR point clouds, have outperformed the detection accuracy of models that rely solely on cameras or LiDAR. (Yang et al., 2022; Liu et al., 2023b; Li et al., 2022b). Additionally, multi-sensor fusion (MSF) techniques are generally recognized as a defensive measure against attacks (Cao et al., 2021; Liang et al., 2022), as the extra modality provides supplementary information to validate detection results. Viewed in this light, a counter-intuitive yet innovative question arises: Can we attack fusion models through a single modality, even the less significant one, thereby directly challenging the security assumption of MSF? Yet, this fundamental question has not been sufficiently answered in the literature. Previous research has demonstrated successful attacks against camera-LiDAR fusion models by targeting either multiple modalities (Cao et al., 2021; Tu et al., 2021) or the LiDAR modality alone (Hallyburton et al., 2022). 
However, these approaches are not easy to implement and require additional equipment such as photodiodes, laser diodes (Hallyburton et al., 2022), and industrial-grade 3D printers (Cao et al., 2021; Tu et al., 2021) to manipulate LiDAR data, thus increasing the deployment cost for attackers. Consequently, we explore the possibility of attacking fusion models via the camera modality, as attackers can more easily perturb captured images using affordable adversarial patches. Nevertheless, this attack design presents additional challenges. For example, the camera modality is considered less significant in fusion models for 3D object detection since LiDAR provides abundant 3D information. The performance of both state-of-the-art LiDAR-based models and ablations of fusion models using only LiDAR surpasses their solely camera-based counterparts significantly (Liang et al., 2022; Liu et al., 2023b; Motional, 2023) (see more experimental results in Appendix A). Figure 1: Single-modal attacks against a camera-LiDAR fusion model using the camera modality. The lesser significance of the camera modality in fusion can limit its impact on detection results. Moreover, different fusion models can exhibit distinct vulnerabilities in the camera modality, necessitating varying attack strategies. The cutting-edge adversarial patch optimization technique against camera-only models (Cheng et al., 2022) has limitations in generating deployable patches when optimizing over the entire scene, as it fails to consider the semantics of the input. Hence, a problem remains open: how can a single-modal attack be designed to effectively subvert fusion models? In response to these two challenges, we propose a novel attack framework against camera-LiDAR fusion models through the less significant camera modality. We utilize adversarial patches as the attack vector, aiming to cause false negative detection results, and our main focus lies on the early-fusion scheme, including data-level and feature-level fusion strategies. As shown in Figure 1, our attack employs a two-stage approach to generate an optimal adversarial patch for the target fusion model. In the first stage (2nd column), we identify vulnerable regions in the image input using our novel sensitivity distribution recognition algorithm. The algorithm employs an optimizable mask to identify the sensitivity of different image areas under adversarial attacks. Based on the identified vulnerable regions, we then classify the fusion model as either object-sensitive or globally sensitive, enabling tailored attack strategies for each type of model. In the second stage (3rd column), we design two attack strategies for different types of models to maximize attack performance. For globally sensitive models, we devise scene-oriented attacks, wherein adversarial patches can be placed on static background structures (e.g., roads or walls) to compromise the detection of arbitrary nearby objects (see the undetected pedestrians in the red circle of Figure 1). For object-sensitive models, we implement object-oriented attacks that can compromise the detection of a target object by attaching the patch to it (see the undetected vehicle in the red circle of Figure 1). Compared to Cheng et al. (2022), the patches generated by our proposed framework offer a significant advantage by being both physically deployable and effective (see comparison in Appendix J).
Our contributions are: • We present single-modal attacks against advanced camera-LiDAR fusion models leveraging only the camera modality, thereby further exposing the security issues of MSF-based AV perception. • We develop an algorithm for identifying the distribution of vulnerable regions in images, offering a comprehensive assessment of areas susceptible to adversarial attacks. • We introduce a framework for attacking fusion models with adversarial patches, which is a two-stage approach and involves different attack strategies based on the recognized sensitivity type of the target model. The threat model is detailed in Appendix P. • We evaluate our attack using six state-of-the-art fusion-based and one camera-only models on Nuscenes (Caesar et al., 2020), a real-world dataset collected from industrial-grade AV sensor arrays. Results show that our attack framework successfully compromises all models. Object-oriented attacks are effective on all models, reducing the detection score of a target object from 0.728 to 0.156 on average. Scene-oriented attacks are effective for two globally sensitive models, decreasing the mean average precision (mAP) of detection performance from 0.824 to 0.353. Experiments in simulation and physical-world also validate the practicality of our attacks in the real world. Demo video is available at https://youtu.be/xhXtzDezeaM 2 RELATED WORK Camera-LiDAR Fusion. AVs are typically equipped with multiple surrounding cameras, providing a comprehensive view, and LiDAR sensors are usually mounted centrally on top of the vehicle, enabling a 360-degree scan of the surrounding environment, resulting in a 3D point cloud. Images and point clouds represent distinct modalities, and numerous prior works have investigated methods to effectively fuse them for improved object detection performance. Specifically, the fusion strategies can be categorized into three types based on the stage of fusion: 1) data-level fusion, which leverages the extracted features from one modality to augment the input of the other modality (Yin et al., 2021; Vora et al., 2020; Wang et al., 2021); 2) decision-level fusion, which conducts independent perception for each modality and subsequently fuses the semantic outputs (BaiduApollo); and 3) feature-level fusion, which combines low-level machine-learned features from each modality to yield unified detection results (Liu et al., 2023b; Liang et al., 2023; Yang et al., 2022; Li et al., 2022b; Bai et al., 2022; Chen et al., 2022b). Feature-level fusion can be further divided into alignment-based and non-alignment-based fusion. Alignment-based fusion entails aligning camera and LiDAR features through dimension projection at the point level (Li et al., 2020; Vora et al., 2020; Chen et al., 2022a), the voxel level (Li et al., 2022b; Jiao et al., 2022), the proposal level (Chen et al., 2017; Ku et al., 2018), or the bird’s eye view (Liu et al., 2023b; Liang et al., 2022) before concatenation. For non-alignment-based fusion, cross-attention mechanisms in the transformer architecture are employed for combining different modalities (Yang et al., 2022; Bai et al., 2022). Contemporary fusion models primarily use feature-level fusion for its superior feature extraction capability and performance. Hence, we focus on introducing and analyzing this type of fusion strategy in our method design. It is worth noting that our approach can also be directly applied to data-level fusion, as demonstrated in our evaluation (see Section 5). 
More discussion of fusion strategies is in Appendix B. Appendix C introduces the general architecture of camera-LiDAR fusion. **3D Object Detection Attacks.** 3D object detection models (Cheng et al., 2022; Liu et al., 2021a; Cui et al., 2021) can be classified into three categories: camera-based, LiDAR-based, and fusion-based models. Attacks targeting each category have been proposed in the context of AV systems. 1) For camera-based models, adversaries typically employ adversarial textures to manipulate the pixels captured by AV cameras (Zhang et al., 2021; Boloor et al., 2020). This approach is cost-effective and can be easily implemented through printing and pasting an adversarial patch. Recent studies have concentrated on enhancing the stealthiness of the adversarial patterns (Cheng et al., 2022; Duan et al., 2020). 2) In the case of LiDAR-based models, some attackers utilize auxiliary equipment, such as photodiodes and laser diodes, to intercept and relay the laser beams emitted by AV LiDAR systems, thereby generating malicious points in the acquired point cloud to launch the attack (Cao et al., 2019, 2023; Sun et al., 2020). Alternatively, others employ malicious physical objects with engineered shapes to introduce adversarial points in attacks (Tu et al., 2020; Abdelfattah et al., 2021a; Cao et al., 2019). 3) Regarding camera-LiDAR fusion models, multi-modal attacks have been developed that perturb both camera and LiDAR input either separately (Tu et al., 2021; Abdelfattah et al., 2021b) or concurrently (Cao et al., 2021), using the previously mentioned attack vectors. Additionally, single-modal attacks on solely LiDAR input have been conducted in a black-box manner (Hallyburton et al., 2022) to fool fusion models. For camera-oriented single-modal attacks, there are prior works investigating the robustness of fusion models when subjected to noisy camera input (Park et al., 2021; Kim & Ghosh, 2019). However, Kim & Ghosh (2019) mainly considered random noise, specifically Gaussian noise, instead of physical-world adversarial attacks. Park et al. (2021) mainly focused on digital-space attacks and exclusively targeted an early model using single-view images. Differently, our study considers physically practical attacks and investigates fusion models utilizing multi-view images and a transformer-based detection head. **3 MOTIVATION** Despite the challenges mentioned in Section 1, it is still theoretically possible to conduct a camera-only attack against fusion models. The intuition behind this is that adversarial effects from the camera modality can propagate through model layers, contaminate the fused features, and ultimately impact the model output (see Appendix D for a detailed feasibility analysis). To examine the actual performance of camera-only adversarial attacks on SOTA fusion models, we illustrate an example in Figure 2. A frame is derived from the Nuscenes dataset containing both camera and LiDAR data (the first row). It represents a scenario where the ego-vehicle is navigating a road populated with multiple cars and pedestrians. Figure 2: Motivating example of an adversarial patch attack on images against fusion models. In benign cases, two cutting-edge fusion models, DeepInteraction (Yang et al., 2022) and BEVFusion-PKU (Liang et al., 2022), can accurately detect objects in the given scene. We then implement a conventional patch attack (Brown et al., 2017) by generating a patch on the road to induce false negative detections.
The performance of DeepInteraction is undisturbed by the attack, as illustrated in the second row of Figure 2. In contrast, BEVFusion-PKU is successfully disrupted, evidenced by its inability to detect objects proximal to the patch, highlighted by red circles in the third row. This discrepancy in the models’ responses confirms that exploiting the camera modality can impact fusion models, while highlighting that uniform attack strategies may not be universally effective due to the unique vulnerabilities inherent in different models, such as varying susceptible regions. Although the SOTA patch attack can be adapted to optimize the patch region over the scene, the generated patch is not deployable (see Appendix I), limiting its application. To characterize the susceptible regions, we introduce the concept of “sensitivity” as a property of areas in input images. It measures the degree to which a specific area of an image impacts adversarial goals. An area with high sensitivity means perturbations there have a large influence and can achieve good attack performance. Hence, sensitive regions are more vulnerable to adversarial attacks than other regions. Formally, the sensitivity $S_A$ of an area $A$ is defined as $$S_A \propto \max_p \{ L_{adv}(x, l) - L_{adv}(x', l) \},$$ where $x' = x \odot (1 - A) + p \odot A$. Here, $x$ is the input image, $l$ is the LiDAR point cloud and $x'$ is the adversarial image with perturbations $p$ in region $A$. $L_{adv}$ denotes the adversarial loss defined by adversarial goals. Examining the sensitivity of each area on the image through individual patch optimization is very time-consuming, and it becomes unaffordable as the granularity of the considered unit area increases. Despite the availability of various interpretation methods for model decisions (e.g., GradCAM (Selvaraju et al., 2017) and ScoreCAM (Wang et al., 2020)), which can generate heatmaps to highlight areas of attention in images, it is essential to distinguish between interpreting model decisions and recognizing sensitivity. For instance, our motivating example shows that the road is a susceptible region for adversarial attacks on BEVFusion-PKU. However, the main focus of object detection should be directed towards the objects themselves rather than the road, as an interpretation method would show (Gildenblat, 2022). Therefore, to recognize the sensitivity distribution on input images efficiently, we propose a novel optimization-based method in the first stage, and design different attack strategies in the second stage to maximize attack performance. 4 METHOD Overview. Figure 3 presents the framework of our single-modal adversarial attack on fusion models using an adversarial patch, employing a two-stage approach. Initially, we identify the sensitivity distribution of the subject network, and subsequently, we launch an attack based on the identified sensitivity type. During the first stage, to recognize the sensitivity distribution, we define perturbations and perturbation masks with dimensions identical to the multi-view image input. We then compose the adversarial input by applying the patch and mask to images of a scene sampled from the dataset (step ①). After feeding the adversarial input images and corresponding benign LiDAR data to the subject fusion model, we obtain object detection results (step ②).
We calculate the adversarial loss based on the detection scores of objects in the input scene (step ③) and utilize back-propagation and gradient descent to update masks and perturbations, aiming to minimize adversarial loss and mask loss (step ④). We repeat this process for thousands of iterations until convergence is achieved, and then visualize the final mask as a heatmap to determine the sensitivity type (step 5). The heatmap’s high-brightness regions signify areas more susceptible to adversarial attacks. Based on the distribution of sensitive areas, we classify the fusion model into two types: global sensitivity and object sensitivity. Global sensitivity refers to the distribution of sensitive areas covering the entire scene, including objects and non-object background. Object sensitivity, on the other hand, indicates that only object areas are sensitive to attacks. In the second stage, we adopt different attack strategies based on the identified sensitivity heatmap type. For global sensitivity, we implement scene-oriented attacks. By placing a patch on the static background (e.g., the road), we deceive the fusion model and compromise the detection of arbitrary objects surrounding the patch. For both object sensitivity and global sensitivity, we can employ object-oriented attacks. In this approach, we attach a patch to a target object, causing failure in detecting it while leaving the detection of other objects unaltered. Since adversarial patches, optimized as 2D images, would be deployed physically during attacks, we employ projections (Cheng et al., 2023) to simulate how the patch would look on the scene image once it is physically deployed (refer to Figure 4), which enhances the physical-world robustness. The two attack strategies differ mainly in their projection functions and the scope of affected objects. Details are discussed later. Sensitivity Distribution Recognition. We leverage the gradients of images with respect to the adversarial loss as an overall indicator to understand the sensitivity of different image areas, since they are closely related to the relative weights assigned to the camera modality. (See detailed analysis in Appendix E.) In a formal setting, the proposed sensitivity distribution recognition algorithm can be articulated as an optimization problem. The primary objective is to concurrently minimize an adversarial loss and a mask loss, which can be mathematically represented as follows: \[ \arg \min_{p,m} L_{adv} + \lambda L_{mask}, \quad \text{s.t. } p \in [0,1]^{3 \times h \times w}, m \in R^{1 \times \lfloor h/s \rfloor \times \lfloor w/s \rfloor}, \] where \(L_{adv} = MSE(f_{scores}(x', l), 0)\); \(L_{mask} = MSE(M, 0)\); \[ x' = x \odot (1 - M) + p \odot M; \quad M[i,j] = \frac{1}{2} \times \tanh(\gamma \cdot m[\lfloor i/s \rfloor, \lfloor j/s \rfloor]) + \frac{1}{2}. \] Here, \(x\) is the image input, which is normalized and characterized by dimensions \(h\) (height) and \(w\) (width). The symbols \(l, p, m,\) and \(\lambda\) represent the LiDAR input, the perturbations on image with dimensions equal to \(x\), the initial mask parameters, and the mask loss weight hyperparameter, respectively. The desired sensitivity heatmap corresponds to the perturbation mask \(M\). Visualization of variables can be found in Figure 3. Initially, the mask parameters \(m \in R^{1 \times \lfloor h/s \rfloor \times \lfloor w/s \rfloor}\) are transformed into the perturbation mask \(M \in [0,1]^{1 \times h \times w}\) using Equation 3. 
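The optimization above can be read concretely as the minimal PyTorch-style sketch below. It is an illustrative reading of Equations 1–3 rather than the authors' implementation: the fusion model is abstracted as a callable `model(images, lidar)` that returns detection confidence scores, and the hyperparameter values (`s`, `gamma`, `lam`, the learning rate, and the iteration count) are placeholders rather than the values used in the paper.

```python
import torch
import torch.nn.functional as F

def recognize_sensitivity(model, images, lidar, s=8, gamma=10.0, lam=1.0,
                          iters=2000, lr=0.01):
    """Minimal sketch of the sensitivity recognition optimization (Eq. 1-3).

    `model(images, lidar)` is assumed to return detection confidence scores;
    `images` is a normalized tensor of shape (V, 3, h, w) for V camera views.
    """
    V, _, h, w = images.shape
    # Perturbation p and low-resolution mask parameters m (Eq. 1).
    p = torch.rand(V, 3, h, w, requires_grad=True)
    m = torch.zeros(V, 1, h // s, w // s, requires_grad=True)
    opt = torch.optim.Adam([p, m], lr=lr)

    for _ in range(iters):
        # Eq. 3: upsample m to full resolution and squash it into [0, 1].
        M = 0.5 * torch.tanh(gamma * F.interpolate(m, size=(h, w))) + 0.5
        # Eq. 2: composite the adversarial image from x, p and the mask M.
        x_adv = images * (1 - M) + p.clamp(0, 1) * M
        scores = model(x_adv, lidar)           # detected object scores
        loss_adv = F.mse_loss(scores, torch.zeros_like(scores))
        loss_mask = (M ** 2).mean()            # L_mask = MSE(M, 0)
        loss = loss_adv + lam * loss_mask
        opt.zero_grad()
        loss.backward()
        opt.step()

    return M.detach()                          # the sensitivity heatmap
```

The converged mask is then visualized as the heatmap discussed next: the dual objective concentrates large mask values on regions where perturbations help the attack and suppresses them elsewhere.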
We use \(\tanh()\) function to map values in \(m\) into the \([0,1]\) range, and its long-tail effect encourages the mask \(M\) values to gravitate towards either 0 or 1. The hyperparameters \(\gamma\) and \(s\) modulate the convergence speed and heatmap granularity, respectively. Subsequently, the perturbation mask \(M\) is utilized to apply the perturbation \(p\) to the input image \(x\), resulting in the adversarial image \(x'\). \(\odot\) denotes element-wise multiplication. Adversarial image \(x'\) and benign LiDAR data \(l\) are then fed to the fusion model \(f_{scores}\). Since our attack goals are inducing false negative detection results, one objective of our optimization is to minimize the detected object scores. Hence, we use the mean square error (MSE) between the scores and zero as the adversarial loss \(L_{adv}\). In this context, the output of \(f_{scores}\) consists of the detected object scores (confidence). The optimization’s secondary objective is to minimize the perturbation mask values, achieved by incorporating a mask loss \(L_{mask}\). The optimization of these two losses is a dual process. Minimizing the adversarial loss (i.e., maximizing attack performance) necessitates a higher magnitude of perturbations on the input. Conversely, minimizing the mask loss indicates a lower magnitude of perturbations. As a result, the dual optimization process converges on applying higher magnitude perturbations on sensitive areas (to improve attack performance) and lower magnitudes for insensitive parts (to minimize mask loss). Hence, the mask \(M\) serves as a good representation of the sensitivity distribution, and visualizing \(M\) allows for the attainment of the sensitivity heatmap. Then we can further classify the fusion model into object sensitivity or global sensitivity by comparing the expectation of the average intensity of object areas with non-object background in each scene as follows: \[ T(f) = \begin{cases} \text{Object}, & \mathbb{E}_x \left[ \frac{\sum(M^o \odot A^o_x)}{\sum A^o_x} \right] > \beta \mathbb{E}_x \left[ \frac{\sum(M^o \odot (1-A^o_x))}{\sum(1-A^o_x)} \right] \\ \text{Global}, & \text{otherwise} \end{cases}. \] Here, \(T(f)\) represents the sensitivity type of fusion model \(f\), and \(A^o_x\) is a mask with values 1 for object areas and 0 for non-object areas in scene image \(x\). The mask \(A^o_x\) is generated by considering the pixels covered by bounding boxes of detected objects in benign cases. $M^x$ refers to the recognized sensitivity heatmap of $x$. $\beta$ is the classification threshold and set to 3 in our experiments. **Attack Strategies.** Two attack strategies, namely scene-oriented attacks and object-oriented attacks, are introduced based on the fusion model’s sensitivity type. Both strategies employ optimization-based patch generation methods. Back-propagation and gradient descent are utilized iteratively to solve the optimization problem. Formally, the problem is defined as: $$\arg \min_p E_{(x,l) \sim D} [MSE(f_s(x', l), 0)], \quad \text{s.t. } p \in [0, 1]^{3 \times h \times w}, M \in \{0, 1\}^{1 \times h \times w},$$ where $x' = x \odot (1 - M_x) + p_x \odot M_x; \quad M_x = proj_x(M); \quad p_x = proj_x(p).$ Here, scene images $x$ and LiDAR data $l$ are randomly sampled from the training set $D$. The mask $M$ represents a patch area for cropping the patch image, with values equal to 1 inside a predefined patch area and 0 elsewhere. Unlike Equation 1, $M$ contains discrete values and is not optimizable. 
$proj_x()$ signifies the projection of the original patch image (see the “2D patch image” in Figure 4a) onto a specific area of the scene image $x$ (see the “captured image” in Figure 4a), to simulate how the patch would look once it’s physically deployed, which minimizes the disparity between digital space and the physical world. The target region is contingent upon the attack strategy. Similarly, the output of the fusion model $f_s$ consists of detected object scores, which vary in scope depending on specific attack strategies. We minimize the MSE between detected object score(s) and zero to achieve false negative detection results, and we leverage the Expectation over Transformation (EoT) (Athalye et al., 2018) across all training samples and color jitters (i.e., brightness, contrast and saturation changes) in the patch to enhance the physical robustness and generality of our attack. The adversarial pattern can be concealed within natural textures (e.g., dirt or rust), utilizing existing camouflage techniques (Duan et al., 2020) to remain stealthy and persistent, avoiding removal. Specifically, for **scene-oriented attacks**, the goal is to compromise the detection of arbitrary objects near an adversarial patch attached to static structures (e.g., the road) of a target scene. In this scenario, the training set $D$ is composed of the target scene in which the ego-vehicle is stationary (e.g., stopped at an intersection or in a parking lot). The categories and locations of objects surrounding the ego-vehicle in the scene can change dynamically. The output of the fusion model $f_s$ during optimization is the detection score of all detected objects in the target scene. To simulate the appearance of the patch on the scene image once deployed, $proj_x$ first projects pixels of the patch image and mask into 3D space on the road (step ① in Figure 4a). Then the function maps them back onto the scene image (step ②). The patch’s 3D position is predefined with distance $d$ and viewing angle $\alpha$ by the attacker. The victim vehicle’s camera height $g$ above ground can be known from the dataset, which ensures by definition that the patch is on the road. This process can be expressed formally with Equations 7 and 9, where $(u^p, v^p)$ is a pixel’s coordinates on the patch image $p$, $(x^p, y^p, z^p)$ the 3D coordinates of the pixel on the ground in the camera’s coordinate system, $(u^s, v^s)$ the corresponding pixel on the scene image, $K$ the camera intrinsic parameters. Other variables are defined in Figure 4.

$$\begin{bmatrix} x^p \\ y^p \\ z^p \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \alpha & 0 & -\sin \alpha & q \\ 0 & 1 & 0 & 0 \\ \sin \alpha & 0 & \cos \alpha & d \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} W/w & 0 & -W/2 \\ 0 & 0 & g \\ 0 & -H/h & H/2 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix},$$

$$\begin{bmatrix} x^p \\ y^p \\ z^p \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \alpha & 0 & -\sin \alpha & q \\ 0 & 1 & 0 & 0 \\ \sin \alpha & 0 & \cos \alpha & d \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} W/w & 0 & -W/2 \\ 0 & H/h & -H/2 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix},$$

$$[u^s \ v^s \ 1]^T = 1/z^p \cdot K \cdot [x^p \ y^p \ z^p \ 1]^T.$$
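Before turning to the object-oriented variant, the scene-oriented projection of Equations 7 and 9 can be read concretely as the numpy sketch below, which maps one patch pixel onto the captured image. It is illustrative only: the function and variable names are ours, the camera intrinsics `K` are assumed to be supplied as a 3×4 projection matrix, and `W`, `H` (physical patch size) versus `w`, `h` (patch image resolution) follow the notation above.

```python
import numpy as np

def ground_patch_pixel_to_scene(u_p, v_p, K, W, H, w, h, d, q, alpha, g):
    """Sketch of Eq. 7 and Eq. 9 for a patch lying flat on the ground.

    (u_p, v_p): pixel coordinates on the 2D patch image.
    K         : assumed 3x4 camera projection (intrinsic) matrix.
    d, q      : longitudinal / lateral placement of the patch.
    alpha     : viewing angle;  g: camera height above the ground.
    """
    # Patch pixel -> metric coordinates on the ground plane (right matrix of Eq. 7).
    pix2ground = np.array([[W / w, 0.0,    -W / 2],
                           [0.0,   0.0,     g    ],
                           [0.0,  -H / h,   H / 2],
                           [0.0,   0.0,     1.0  ]])
    # Rigid placement of the patch in the camera frame (left matrix of Eq. 7).
    pose = np.array([[np.cos(alpha), 0.0, -np.sin(alpha), q],
                     [0.0,           1.0,  0.0,           0.0],
                     [np.sin(alpha), 0.0,  np.cos(alpha), d],
                     [0.0,           0.0,  0.0,           1.0]])
    X = pose @ pix2ground @ np.array([u_p, v_p, 1.0])   # (x^p, y^p, z^p, 1)
    # Eq. 9: perspective projection onto the scene image.
    uvw = (K @ X) / X[2]
    return uvw[0], uvw[1]                                # (u^s, v^s)
```

Applying this mapping (or its inverse) to every patch and mask pixel is what $proj_x$ does before the patch is optimized with Equation 5. For **object-oriented attacks**, the goal is to compromise the detection of the target object with an attached adversarial patch while keeping other objects unaffected.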
In this case, the training set $D$ is composed of frames in which the target object appears. For example, the ego-vehicle may follow a target vehicle with the background changes dynamically in the scene. The output of the fusion model $f_s$ during optimization is the detection score of the target object exclusively. The function $\text{proj}_L$ projects the patch image and mask onto the target object in the scene image using Equation 8 and 9 corresponding to step ① and ② in Figure 4b respectively. Unlike the scene-oriented attack, in which the location of the patch is defined by attackers using longitudinal distance $d$, lateral distances $q$ and viewing angle $\alpha$, in object-oriented attacks, these projection parameters change dynamically depending on the position of the target object in training data. Hence, we innovatively extract them from the estimated 3D bounding box of the target object before projecting the patch. 5 EVALUATION Model selection In our evaluation, we use six state-of-the-art camera-LiDAR fusion-based 3D object detection models that are published recently. These models include Transfusion (Bai et al., 2022), DeepInteraction (Yang et al., 2022), UVTR (Li et al., 2022b), PointAugmenting (Wang et al., 2021), BEVFusion-MIT (Liu et al., 2023b) and BEVFusion-PKU (Liang et al., 2022). These models cover data-level and feature-level fusion strategies and contain a diverse range of feature-level fusion approaches, including alignment-based fusion, non-alignment-based fusion, and various detection head designs. Additionally, we use a camera-only model called BEVFormer (Li et al., 2022c) as comparison. Detailed selection criteria can be found in Appendix F. Scene selection. Our evaluation scenes are selected from the Nuscenes dataset (Caesar et al., 2020). This dataset contains real-world multi-view images and point cloud data collected from industrial-grade sensor array, and they are derived from hundreds of driving clips. The selected scenes for testing in our evaluation contains 375 data frames, encompass diverse road types, surrounding objects and time-of-day situations. Additionally, we conduct experiments in simulation and in the physical world. By leveraging this rich dataset along with simulation and physical-world experiments, our evaluation framework benefits from an accurate representation of real-world driving scenarios. 5.1 Sensitivity Distribution Recognition This section reports on the evaluation of our sensitivity distribution recognition method. We present the qualitative results of the sensitivity heatmap generated by our method and validate the property of the heatmap in Appendix C. We utilize Equation 1 to generate the sensitivity heatmap for the six fusion models, using two different data frames, each with varying proportions of vehicles and pedestrians. Detailed experimental setups can be found in Appendix H. Figure 5 depicts the generated sensitivity heatmaps. The first two images are the scene images captured by the front camera of the ego vehicle while the subsequent rows exhibit the sensitivity distributions of the corresponding scene image using different models. The brightness or warmth of colors in the heatmap corresponds to the sensitivity of a particular region to adversarial attacks. Higher brightness areas signify higher susceptibility to attacks, while lower brightness denotes more robustness. 
Observe that the sensitive regions for the initial four models, namely Transfusion (Bai et al., 2022), DeepInteraction (Yang et al., 2022), UVTR (Li et al., 2022b) and PointAugmenting (Wang et al., 2021), primarily lie on --- 1The code is available at: https://github.com/Bob-cheng/CL-FusionAttack Table 1: Attack performance of the scene-oriented adversarial patch attack against 3D object detection. | Models | mAP | CR | TK | BS | TR | BR | PD | BI | |------------|-----|-----|-----|-----|-----|-----|-----|-----| | BF-PKU | Ben.| 0.824| 0.453| 0.448| 1.000| 0.991| 0.898| 0.990| 0.989| | | Adv.| 0.333| 0.136| 0.116| 0.524| 0.239| 0.611| 0.242| 0.604| | | Diff.| 57.2%| 70.0%| 74.1%| 47.6%| 75.6%| 32.0%| 75.6%| 38.9%| | BF-MIT | Ben.| 0.886| 0.538| 0.939| 0.858| 0.992| 0.895| 0.989| 0.990| | | Adv.| 0.553| 0.279| 0.652| 0.720| 0.488| 0.623| 0.337| 0.772| | | Diff.| 37.6%| 48.1%| 30.6%| 16.1%| 50.8%| 30.4%| 65.9%| 22.0%| | TF | Ben.| 0.758| 0.493| 0.451| 0.700| 0.991| 0.692| 0.989| 0.990| | | Adv.| 0.759| 0.494| 0.452| 0.706| 0.992| 0.693| 0.989| 0.989| | | Diff.| 0.1%| 0.2%| 0.2%| 0.9%| 0.1%| 0.1%| 0.0%| 0.1%| | DI | Ben.| 0.807| 0.459| 0.522| 0.947| 0.990| 0.750| 0.989| 0.989| | | Adv.| 0.808| 0.460| 0.529| 0.947| 0.990| 0.751| 0.989| 0.989| | | Diff.| 0.1%| 0.2%| 1.3%| 0.0%| 0.0%| 0.1%| 0.0%| 0.0%| | UVTR | Ben.| 0.850| 0.557| 0.989| 0.754| 0.990| 0.736| 0.982| 0.989| | | Adv.| 0.862| 0.558| 0.989| 0.786| 0.990| 0.741| 0.982| 0.989| | | Diff.| 1.4%| 0.2%| 0.0%| 4.8%| 0.0%| 2.6%| 0.7%| 0.0%| | PointAug | Ben.| 0.724| 0.471| 0.466| 0.683| 0.992| 0.714| 0.984| 0.981| | | Adv.| 0.716| 0.467| 0.468| 0.679| 0.988| 0.705| 0.984| 0.981| | | Diff.| 1.1%| 0.8%| 0.4%| 0.6%| 0.4%| 1.3%| 0.0%| 0.0%| | BFM | Ben.| 0.519| 0.417| 0.811| 0.280| 0.247| 0.712| 0.650| 0.518| | | Adv.| 0.514| 0.432| 0.799| 0.284| 0.247| 0.711| 0.605| 0.518| | | Diff.| 1.1%| 3.6%| 1.5%| 1.4%| 0.0%| 0.1%| 6.9%| 0.0%| * CR: Car, TK: Truck, BS: Bus, TR: Trailer, BR: Barrier, PD: Pedestrian, BI: Bicycle Table 2: Attack performance of the object-oriented adversarial patch attack. | Models | Targeted object | Other objects | |------------|-----------------|--------------| | | Ben. Score | Adv. Score | Diff. | Ben. mAP | Adv. mAP | Diff. | | TF | 0.655 | 0.070 | 89.24%| 0.921 | 0.923 | 0.30%| | DI | 0.658 | 0.110 | 83.32%| 0.964 | 0.965 | 0.13%| | UVTR | 0.894 | 0.189 | 78.83%| 0.963 | 0.963 | 0.00%| | PointAug | 0.734 | 0.177 | 75.80%| 0.954 | 0.955 | 0.10%| | BF-MIT | 0.714 | 0.219 | 69.37%| 0.965 | 0.968 | 0.34%| | BF-PKU | 0.712 | 0.168 | 76.38%| 0.956 | 0.958 | 0.13%| | Average | 0.728 | 0.156 | 78.63%| 0.954 | 0.955 | 0.17%| | BFM | 0.955 | 0.095 | 90.02%| 0.578 | 0.571 | 1.08%| Table 3: Physical-world attack performance. | Pedestrian ID | Original | Benign | Adversarial | Difference | |---------------|----------|--------|-------------|------------| | 1 | 0.685 | 0.693 | 0.194 | 72.01% | | 2 | 0.674 | 0.642 | 0.219 | 65.89% | | 3 | 0.659 | 0.681 | 0.237 | 65.20% | | Average | 0.673 | 0.672 | 0.217 | 67.76% | Areas of objects like vehicles and pedestrians. This suggests that attacks on objects could prove to be more effective, whereas non-object areas such as the road and walls are more resistant. The following two models (i.e., BEVFusion-MIT [Liu et al., 2023b] and BEVFusion-PKU [Liang et al., 2022]) demonstrate high sensitivities throughout the entire scene, irrespective of objects or background regions. This indicates their vulnerability at a global level. Our technique also works on camera-only models. 
As shown in the last row, the camera-only model (i.e., BEVFormer [Li et al., 2022c]) demonstrates higher sensitivity in the object area and it is also classified as object sensitivity according to Equation 4. Since different sensitivity types demonstrate distinct vulnerability patterns, we discuss the reason behind in our defense discussion (Appendix N). 5.2 SCENE-ORIENTED ATTACKS Scene-oriented attacks are primarily aimed at fusion models with global sensitivity. Such models are vulnerable to adversarial patches placed on non-object background structures (e.g., the road). Our attack is universal, which can affect the detection of arbitrary dynamic objects in a given scene, even those that were not initially present during patch generation (training). Therefore, our attack is more practical in real-world scenarios as attackers can effortlessly paste generated patches onto the ground, rendering victim vehicles in close proximity blind. This could pose a great risk to pedestrians and surrounding vehicles. Detailed experimental setups can be found in Appendix H. Table 1 presents the quantitative results of our evaluation and qualitative examples can be found in Appendix I. In Table 1, the first column shows various models, the third column presents the mAP of object detection results in the test set, and the subsequent columns denote the average precision (AP) of different objects categories. We report the subject model’s benign performance (no patch), adversarial performance (patch applied) and their difference in percentage (attack performance) for each model. Our findings indicate that the detection accuracy of the two globally sensitive models (i.e., BEVFusion-PKU and BEVFusion-MIT) has considerably decreased, for all object categories. The mAP decreased more than 35%. However, the other five models with object sensitivity remain unaffected. These results align with our conclusion in Section 5.1 and further reveal the vulnerability of globally sensitive models to more influential scene-oriented attacks. Additionally, our experiment confirms the robustness of object-sensitive models under attacks in non-object background areas. In comparison, the camera-based model (i.e., BEVFormer) demonstrates worse benign performance than all fusion-based models, but it is also robust to scene-oriented attacks due to its object-sensitive nature. Demo video is available at https://youtu.be/xhXtzDezeaM. 5.3 OBJECT-ORIENTED ATTACKS Object-oriented attacks target object-sensitive models that are more robust to attacks in non-object background areas. The influence of this attack is more localized, as opposed to the scene-oriented attacks. It concentrates the impact on a specific target object, leaving the detection of other objects unaltered. This approach offers a higher degree of customization for attackers, enabling them to manipulate the impact at the object level rather than the entire scene. Detailed experimental setups can be found in Appendix H. Our evaluation results are presented in Table 2 and qualitative examples are in Appendix I. As shown, the first column represents various fusion models and a camera-only model for comparison, the second to fourth columns display the average detection score of the target object, and the fifth to seventh columns indicate the mAP of other objects (including car, bus, pedestrian and motorcycle). 
The results demonstrate a substantial decrease in the target object’s detection scores for fusion models, from 0.728 to 0.156 on average, thus validating the efficacy of our object-oriented adversarial attacks across all models regardless of the fusion strategies. Furthermore, the detection results of other objects in the scene remain virtually unaffected, as evidenced by the negligible change in mAP. This phenomenon also holds for the camera-only model, and it shows worse benign mAP and more performance degradation under attack. Videos of the attack can be found at https://youtu.be/xhXtzDezeaM. 5.4 PRACTICALITY To assess the practicality of our single-modal attacks on fusion models, we conducted experiments in both simulated and physical-world environments. Attacks in simulation can be found in Appendix K. We assess the feasibility of our attack in a real-world setting by replacing the front-view images of 30 data frames in the dataset with our custom images taken in the physical world, leaving other views and LiDAR data unchanged. To ensure the compatibility of the dataset’s LiDAR data with our custom images, we maintain the 3D geometry of our physical scenario consistent with the original dataset images. Figure 6b and Figure 6c illustrate an original image and a custom scenario in our experiment respectively. Note that both images maintain similar 3D geometry, with pedestrians crossing the road located at similar positions in both cases. Detailed setups are in Appendix H. Figure 6b exhibits the experimental equipment, while Table 3 details the attack performance. The pedestrian ID, corresponding to the pedestrians in Figure 6b, is denoted in the first column of Table 3, with the subsequent columns reporting the average detection scores in original dataset images (Figure 6b), our benign physical-world images (Figure 6c), and the adversarial physical-world images (Figure 6d). The last column denotes the difference between benign and adversarial physical-world performance. The comparative detection scores in our benign physical-world images and the original dataset images validate the consistency between the original LiDAR data and our custom images, thereby substantiating the efficacy of our image replacement method. Furthermore, the deployment of the adversarial patch results in a significant reduction in the pedestrian detection scores, emphasizing the practicality and effectiveness of our attack strategy in the physical world. We discuss the implications on AV security in Appendix Q. Ablation Studies and Defense Discussions. We conducted ablation studies about the attack performance of various distance and viewing angles of the adversarial patch (Appendix L), and various granularity of the sensitivity heatmap (Appendix M). We discussed both architectural-level defense and DNN-level defense strategies in Appendix N, and the limitations are discussed in Appendix O. 6 CONCLUSION We leverage the affordable adversarial patch to attack the less significant camera modality in 3D object detection. The proposed optimization-based two-stage attack framework can provide a comprehensive assessment of image areas susceptible to adversarial attacks through a sensitivity heatmap, and can successfully attack six state-of-the-art camera-LiDAR fusion-based and one camera-only models on a real-world dataset with customized attack strategies. 
Results show that the adversarial patch generated by our attack can effectively decrease the mAP of detection performance from 0.824 to 0.353 or reduce the detection score of a target object from 0.728 to 0.156 on average. 7 ETHICS STATEMENT Most of our experiments are conducted in the digital space or in a simulated environment. Our physical-world study involving human subjects underwent thorough scrutiny and approval by an institutional IRB. Notably, we conducted physical experiments in a controlled environment on a closed road, utilizing a camera and tripod to capture scenes instead of employing real cars, as elucidated in Appendix H. This deliberate choice minimizes potential threats to the safety of participants. Stringent protocols were implemented, including participants not facing the camera, wearing masks during experiments, and blurring their faces in the photos. No identifiable information from the volunteers is retained by the researchers. 8 ACKNOWLEDGEMENTS This research was supported, in part by IARPA TrojAI W911NF-19-S-0012, NSF 2242243, 1901242 and 1910300, ONR N000141712045, N00014-1410468 and N000141712947, National Research Foundation of Korea(NRF) grant funded by the Korean government (MSIT) RS-2023-00209836. REFERENCES Mazen Abdelfattah, Kaiwen Yuan, Z Jane Wang, and Rabab Ward. Adversarial attacks on camera-lidar models for 3d car detection. In *IROS*, 2021a. Mazen Abdelfattah, Kaiwen Yuan, Z Jane Wang, and Rabab Ward. Towards universal physical attacks on cascaded camera-lidar 3d object detection models. In *ICIP*, 2021b. AIDay. Tesla Autopilot Uses Transformer, 2022. [https://youtu.be/j0z4FweCy4M?t=3621](https://youtu.be/j0z4FweCy4M?t=3621) Rhett Allain. What Is the Angular Field of View for an iPhone 13?, 2022. [https://rjalla.in.medium.com/what-is-the-angular-field-of-view-for-an-iphone-13-199969482531](https://rjalla.in.medium.com/what-is-the-angular-field-of-view-for-an-iphone-13-199969482531) Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In *ICML*, 2018. Autoware. Autoware. [https://www.autoware.org/](https://www.autoware.org/) Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, and Chiew-Lan Tai. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In *CVPR*, 2022. BaiduApollo. Baidu Apollo. [https://apollo.auto/index.html](https://apollo.auto/index.html) Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, and Xuan Zhang. Attacking vision-based perception in end-to-end autonomous driving models. *Journal of Systems Architecture*, 2020. Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. *arXiv preprint arXiv:1712.09665*, 2017. Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In *CVPR*, 2020. Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z Morley Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. In *CCS*, 2019. Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, and Bo Li. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In *S&P*, 2021.
CbmAtAmQla
For PR used in the win rate calculation, how will you determine whether it has converged? I only see the number of iterations in Algorithm 2 in Appendix E. Have you ever analyzed the impact of the number of iterations on the results?
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations Anonymous authors Paper under double-blind review Abstract Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs for reference-free evaluation of open-ended question answering. More specifically, they use the recognized “strongest” LLM as the evaluator, which conducts pairwise comparisons of candidate models’ answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho & MacArthur [2011]; Walsh [2014]) to improve LLM-based evaluations. Specifically, we propose (1) the peer rank (PR) algorithm that takes into account each peer LLM’s pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model’s name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans. 1 Introduction With a rising number of large language models (LLMs) being developed ever more quickly recently, evaluations become increasingly important as they encode values and priorities that the LLM community should improve upon (Jones & Galliers [1995]; Liang et al. [2022]). At the same time, the evaluation becomes harder as well. For example, recent models finetuned with human feedback (RLHF) align with human preference more, but this capability usually cannot be reflected by decent performance on standard NLP benchmarks (e.g., MMLU (Hendrycks et al. [2020]), ARC (Clark et al. [2018])). Furthermore, human queries span a diverse range of settings and scenarios, making it nearly impossible to list them all. To tackle this discrepancy, open-ended questions are being used more often to test LLMs performance (Chiang et al. [2023]). Then, by default, evaluation is done by collecting human preferences of pairwise comparisons and then calculating scores for each LLM to induce a general ranking. Yet the collection process is costly and time-consuming (Zheng et al. [2023]), to automate and scale up the evaluation, most recent works utilize the state-of-the-art LLM as the judge (Dubois et al. [2023]). However, various studies show that this method is problematic, as the pairwise comparison judgment provided usually contains various biases, such as favoring LLMs’ own answers. Motivated by these limitations, we propose the idea of peer evaluation. The goal is to mitigate the biases in automated evaluations while still benefiting from LLM’s strong capability in reading and writing reviews. We propose Peer Rank and Discussion-based evaluation framework (PRD). The suit consists of two alternatives that share the same format and goal – involving peer LLMs’ participation as reviewers to reach a more fair evaluation result where all peers mutually agree. 
We draw insights and lessons from educational psychology research on methodologies of student peer reviewing (Walsh [2014]), as well as their impact and benefits (Cho & MacArthur [2011]; Yalch et al. [2019]). More specifically, peer rank (PR) works for the tournament-style benchmarking setting where each LLM in pairwise matches produces an answer for an open-ended question. Instead of getting the average/majority vote to decide the final preference scoring, we propose to apply higher weights to LLMs reviewers with stronger capabilities. Peer discussion (PD) works for the general pairwise comparison setting. Given two candidate answers, we prompt two other reviewer LLMs to have multi-turn discussions to reach a mutual agreement on the pairwise scoring or preference. The process shares a similar format of LLM interacting with each other through conversations like two communicative agents (Li et al., 2023; Park et al., 2023; Fu et al., 2023b). We conduct extensive experiments and analysis for measuring PR and PD’s capabilities of providing fair pairwise comparisons. PR is tested on Vicuna80, which contains pairwise judgments from human annotators. Our method improves correlations with human judgments and ranking substantially. This paradigm also enables a group of LLMs to induce a self-ranking. PD is tested on both Vicuna80 and LFQA (Xu et al., 2023), which includes annotated pairwise comparisons of Human-Machine and Machine-Machine answers. PD enables LLMs to achieve better pairwise comparisons that are more accurate than single model-based reviews. Both PR and PD mitigate above-mentioned biases especially self-enhancement bias significantly. Further, we provide more analysis for peer discussions, which show: (1) the reviewer LLM model leading discussions is less likely to alter its opinion; (2) stronger LLMs are more likely to hold their opinions. 2 RELATED WORK Automatic Evaluations NLG evaluation methods are mainly of a similarity-based or reference-free type. For similarity-based metrics, the generated texts are compared to reference texts. They can be divided into lexical overlap-based (Papineni et al., 2002; Lin, 2004) and contextualized embedding-based (Zhang et al., 2019) evaluators. In parallel, people have also developed task-specific metrics such as consistency (Kryscinski et al., 2020; Wang et al., 2020), faithfulness (Fabbri et al., 2022; Gao et al., 2023) and coherence (Dziri et al., 2019). This is similar to our peer discussion idea on designing more specific prompts for large language model-based evaluations. Our prompting-based method is more flexible and can act as a unified evaluator (Zhong et al., 2022). Specifically, for long-form or open-ended question answering, early work uses ROUGE to measure the similarity between human and machine-generated answers. However, researchers find that ROUGE is not a fair metric for quality measurement due to the open-ended nature of long-form answers (Krishna et al., 2021; Xu et al., 2023). Fu et al. (2023a) propose GPTScore, which evaluates texts with generative pre-training models like GPT-3. Xu et al. (2023) implements a similar idea for evaluating long-form answers. Given a prompt consisting of a question with two answer candidates, GPT-3 is fine-tuned to output the label answer1 and answer2. Differing from above, it produces pairwise comparisons – preference scores. 
LLMs as evaluators: problems and challenges Most recently, with the trend of developing open-source LLMs, evaluations for benchmarking the progress have become even more important but also more difficult. Apart from testing on standard datasets such as MMLU (Hendrycks et al., 2020), they are often tested on open-ended questions, which are much more prevalent in real life (Nakano et al., 2021; Chiang et al., 2023). People mostly use GPT-4 (Liu et al., 2023; OpenAI, 2023) as an evaluator for either generating scores or pairwise comparisons (Wang et al., 2023b; Zhou et al., 2023). However, such a strategy has fundamental problems because of various biases, such as (1) positional bias (Dettmers et al., 2023; Wang et al., 2023a), where a model favors the first answer in pairwise comparisons; (2) verbosity and length bias (Zheng et al., 2023; Wang et al., 2023b); (3) and most importantly, self-enhancement bias, where an LLM favors its own answers (Liu et al., 2023; Zheng et al., 2023). Efforts have been proposed to tackle them: (1) Using position switching (Wang et al., 2023a) for mitigating positional bias; (2) Zheng et al. (2023) proposes Chatbot Arena, where real users ask questions and provide pairwise judgments of answers generated by two LLMs. But this is time-consuming and costly to ensure fairness – requiring expert-level annotations of pair comparisons. (3) Concurrent to our work, Bai et al. (2023) propose using each language model as an examiner, where each LLM generates questions to test other models. Different from peer evaluation, their “exams” are decentralized and biased with randomly generated questions. Moreover, all of the above works do not support inducing self-rankings through peer ranking. Figure 1: The peer rank process (PR), where each LLM model acts both as reviewers (A, B, C) and contestants (1, 2, 3). From the battles between contestants (pairwise comparisons), it induces a self-ranking. In this example, models A, B, C represent GPT-4, Bard, and Claude, respectively. 3 METHODOLOGIES In general, peer rank can be applied to induce self-ranking – a ranking of a group of LLMs’ own capabilities. Peer discussion provides more benefits in comparing two models, which is more fine-grained and interactive. Both of them aim at reducing the bias in automatic evaluations. We elaborate on the technical details in this section. 3.1 PEER RANK AND SCORING We illustrate the peer rank algorithm in Figure 1. The general idea is to obtain weighted scores of each battle from the peer reviewer’s judgment, and then induce self-rankings from the scores. This process is iterated multiple times until the scores converge. Given a set of questions $Q$, we generate an answer to each question for each language model. Let $A_m(q)$ be the answer to question $q \in Q$ by model $m$. Each battle represents two models (the contestants) answering the same question $q$. The comparison of the answers in a battle by the LLM reviewer model $r$ forms a review. Let $K_r(x, y)$ be the score given by the reviewer $r$ to the pair of answers $(x, y)$. We use a score of $-1$ to indicate the first answer is better, $0$ to indicate a tie, and $1$ to indicate the second answer is better. Suppose we have a set of reviewer models $R$ and a set of contestant models $C$. We form a set of battle reviews, $B = \{(q, i, j, r, s) | q \in Q, (i, j) \in C^2, r \in R\}$, where $s = K_r(A_i(q), A_j(q))$ is the score given by reviewer $r$ to the answers/responses generated by $i$ and $j$ for question $q$. 
We create a shorthand $K^{ij}_r(q)$ for this review. Based on these peer reviews, we can evaluate each model’s performance by calculating metrics such as the win rate and the Elo rating of each contestant. Since each model is ranked by its peers, we call it Peer Rank. 3.1.1 WIN RATE CALCULATION The win rate for a contestant is the ratio of wins for that contestant divided by the number of battles it participates in. Ties are counted as 0.5 wins for both contestants. Our win rate calculation gives differing weight to the scores provided by different reviewers (A, B, C) based on the performance of the corresponding reviewers as a contestant (1, 2, 3). This operates on the assumption that models which are better contestants are also more fit to evaluate and compare answers, so they should be given more weight in evaluation (Equation 2). Put another way, since the score is a measure of a model’s ability to review/grade correctly, we weigh the win rate an LLM gives another LLM by its own score (Walsh, 2014). Initially, all reviewers are given the same weight. On each iteration of the calculation, the win rate for each contestant is calculated using the current weights. The win rates are scaled to the range of [0, 1] using a linear scaling. Then, they are scaled again so that their sum is 1. Next, these results are used as the weights for the next round. Formally, let \( W^c_r \) be the raw win rate of contestant \( c \in C \) from the reviews of reviewer \( r \in R \). This is equal to the number of times \( c \) wins a battle plus half of the number of times \( c \) ties, divided by the number of battles \( c \) participates in.

\[ W^c_r = \frac{\sum_q \sum_{d \in C, d \neq c} [f(K^{dc}_r(q)) + f(-K^{cd}_r(q))]}{2|Q|(|C| - 1)} \]

where \( f(score) = \frac{score + 1}{2} \) maps a score of (loss = −1, tie = 0, win = 1) for the second contestant to a win count of (0, 0.5, 1), so that ties count as half of a win. Note that we negate \( K^{cd}_r(q) \) when inputting it into \( f \) so that the win value of \( c \) is computed instead of \( d \). Also, since there are \( |Q| \) questions, \( |C| - 1 \) contestants to battle, and 2 orders for two contestants to battle, there are \( 2|Q|(|C| - 1) \) battles involving a fixed contestant \( c \). Let \( \alpha^k_r \) be the weight assigned to reviewer \( r \) after iteration \( k \). Initially, \( \alpha^0_r = 1/|R| \), so that all reviewers have the same weight, and the weights add to 1. Namely, we assume each reviewer LLM has the same capabilities to start. The score of contestant \( c \in C \) for iteration \( k \) is the weighted average of the raw win rates for contestant \( c \). We set the weights for the next iteration to \( \alpha^k \):

\[ \text{score}^k_c = \sum_{r \in R} \alpha^{k-1}_r \cdot W^c_r, \quad \alpha^k = \text{Normalize}(\text{MinMax}(\text{score}^k)) \]

where the weights are scaled to a range of [0, 1] and finally normalized to have sum equal to 1:

\[ \text{MinMax}(S) = \frac{S - \min_{r \in R}(S_r)}{\max_{r \in R}(S_r) - \min_{r \in R}(S_r)}, \quad \text{Normalize}(S) = \frac{S}{\sum_{r \in R} S_r} \]

Given this set of equations, we look for the fixed/converging point of the framework. This process is reminiscent of the problem faced by the PageRank algorithm (Page et al., 1999). The detailed equivalent implementation of PR is shown in Algorithm 2 in Appendix E.
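A small Python sketch of this weighted win-rate iteration is given below; it is our illustrative reading of the equations above, not the authors' Algorithm 2. Battle reviews are assumed to be available as tuples `(q, i, j, r, s)` with `s` in {−1, 0, 1}, and every reviewer is assumed to also appear as a contestant so that its own score can serve as its weight. The stopping rule used here (iterating until the reviewer weights change by less than a tolerance) is one simple way to detect the fixed point; a fixed iteration budget works as well.

```python
import numpy as np

def peer_rank_win_rates(reviews, contestants, reviewers, iters=100, tol=1e-6):
    """Sketch of the weighted win-rate iteration (Section 3.1.1).

    `reviews`: list of (q, i, j, r, s) with s in {-1, 0, 1}; -1 means the
    first contestant i wrote the better answer, 1 means j did, 0 is a tie.
    """
    f = lambda s: (s + 1) / 2.0            # (loss, tie, win) -> (0, 0.5, 1)
    cid = {c: k for k, c in enumerate(contestants)}
    rid = {r: k for k, r in enumerate(reviewers)}

    # Raw win rates W[r, c]: wins of contestant c as judged by reviewer r.
    wins = np.zeros((len(reviewers), len(contestants)))
    battles = np.zeros_like(wins)
    for _, i, j, r, s in reviews:
        wins[rid[r], cid[i]] += f(-s)      # i is the first contestant
        wins[rid[r], cid[j]] += f(s)       # j is the second contestant
        battles[rid[r], cid[i]] += 1
        battles[rid[r], cid[j]] += 1
    W = wins / np.maximum(battles, 1)

    # Iterate: weighted scores -> new reviewer weights, until convergence.
    alpha = np.full(len(reviewers), 1.0 / len(reviewers))
    for _ in range(iters):
        scores = alpha @ W                 # score_c = sum_r alpha_r * W[r, c]
        rev_scores = np.array([scores[cid[r]] for r in reviewers])
        spread = max(rev_scores.max() - rev_scores.min(), 1e-12)
        scaled = (rev_scores - rev_scores.min()) / spread       # MinMax
        if scaled.sum() > 0:
            new_alpha = scaled / scaled.sum()                   # Normalize
        else:
            new_alpha = np.full_like(alpha, 1.0 / len(reviewers))
        if np.abs(new_alpha - alpha).max() < tol:               # converged
            alpha = new_alpha
            break
        alpha = new_alpha
    return dict(zip(contestants, alpha @ W)), dict(zip(reviewers, alpha))
```

Calling this with, say, the reviewer set {GPT-4, Claude, GPT-3.5} and the full contestant set would yield weighted win rates of the kind reported for the "All (Weighted)" setting later in the paper.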
### 3.1.2 Elo Calculation Another method for calculating the performance of a contestant relative to other contestants is the Elo rating (Elo, 1967; Askell et al., 2021). The Elo rating method takes a sequence of pairwise reviews and generates ratings for each contestant, with a greater rating indicating better performance. Based on a similar idea, we assign different weights to reviewers based on their previous performance such that a review from a higher-weight reviewer has a greater influence upon Elo ratings. Similarly to the win rate calculation, we start with equal weights on all reviewers and then normalize the resulting Elo ratings to give weights for the next iteration. We repeat the Elo calculation with the new weights, update the weights based on the new ratings, and continue repeating until it converges. A brief overview of the actual Elo ratings calculation follows. All contestants start out with an initial rating of 1000. On each battle, the expected likelihood of each contestant winning is calculated based on the difference between their Elo ratings. The Elo rating of the winner is increased, and the rating of the loser is decreased. The magnitude of the Elo rating change is inversely related to the outcome’s likelihood. In our calculations, we weight reviewers so that reviews by a high-weight reviewer cause larger changes in Elo. For more details, please refer to Algorithm 1 in Appendix E. ### 3.2 Peer Discussions In peer discussion, we prompt two LLMs to discuss how to judge two candidate answers, trying to reach a final agreed review. In Figure 2, we demonstrate the peer discussion process between two LLM reviewers (A and B). The input is a given question and two answers, which may be both generated by machines or one by humans and another by machines (e.g., GPT-3 vs. human answers). They first conduct pairwise comparisons on the answers separately, providing explanations and indicating their preferred answer by outputting the number 1 or 2 at the end (the prompt for getting initial reviews is listed in Appendix 8). Figure 2: The peer discussion process (PD). Blue and orange texts describe the advantages of answer 1 and answer 2. In this example, the two LLM reviewers finally reach a mutual agreement, selecting answer 1 (the human-written answer), which correlates with the human annotator preference. Then, the two models discuss for multiple turns, up to a fixed number of turns. The specific prompt for discussion is listed in Table 1. At the very beginning, a system prompt (role prompt) tells the models their role – whether it is reviewer A or reviewer B (e.g., Claude or GPT-4). Then, all information, including the question, the two comparison answers, and the initial reviews, is listed line by line. The order of initial reviews is the same as that of reviewers in discussions. In other words, if reviewer A leads the discussion, reviewer A’s initial review is listed first. Right before the start of the discussion, the system prompt specifies the detailed requirements, which provide explicit aspects to focus on. Specifically, we draw insights from WebGPT (Nakano et al., 2021)’s annotation guideline (OpenAI, 2022). For long-form question answering, it mainly focuses on (1) Unsupported information: detecting information with no support, assume the worst case: that all of it is false.
This aspect is most important and often determines the overall rating; (2) Core information: about whether the question has actually been answered; (3) Coherence: generally, it is less important than the two above. Then the overall preference is finally determined. An alternative is to repeat the system Table 1: The discussion template for reviewer A at the third turn. All texts above are chat history and are used as input to reviewer A’s LLM model. Core aspects that we instruct the judging/reviewer model to focus on are in boldface. | models | GPT-4 Elo | Rank | All Elo | Rank | All (Weighted) Elo | Rank | Human Raters Elo | Rank | |----------|-----------|------|---------|------|-------------------|------|------------------|------| | GPT-4 | 1282 | 1 | 1165 | 1 | **1213** (-23) | 1 | 1236 | 1 | | Claude | 1150 | 2 | 1104 | 2 | **1125** (-2) | 2 | 1127 | 2 | | Vicuna | 883 | 3 | 930 | 3 | **912** (-8) | 3 | 920 | 3 | | GPT-3.5 | **878** (+10) | 4 | 919 | 4 | 894 | 4 | 868 | 4 | | Bard | 804 | 5 | 881 | 5 | **856** (+8) | 5 | 847 | 5 | | models | GPT-4 Win Rate | Rank | All Win Rate | Rank | All (Weighted) Win Rate | Rank | Human Raters Win Rate | Rank | |----------|----------------|------|--------------|------|------------------------|------|-----------------------|------| | GPT-4 | 0.856 | 1 | 0.749 | 1 | **0.802** (-0.020) | 1 | 0.822 | 1 | | Claude | 0.709 | 2 | 0.662 | 2 | **0.685** (-0.004) | 2 | 0.689 | 2 | | Vicuna | 0.348 | 3 | **0.393** (+0.004) | 3 | 0.376 | 3 | 0.389 | 3 | | GPT-3.5 | **0.342** (+0.028) | 4 | 0.375 | 4 | 0.346 | 4 | 0.314 | 4 | | Bard | 0.245 | 5 | 0.320 | 5 | **0.290** (+0.004) | 5 | 0.286 | 5 | Table 2: Global ranking correlation results. The upper table shows the correlation between LLM reviewer-based ranking and human rater’s ranking. Bottom table shows correlation between global win rates. **Boldfaced numbers** are the closest to scores from human raters. **Blue numbers** show the difference between the score from LLM reviewers and Human raters. For more detailed pairwise win rates, please refer to the heat maps in Appendix F. requirement prompt after each turn. It is to ensure that the models remember their role (reviewer 1 or 2) throughout the discussion history. 4 EXPERIMENTS AND ANALYSIS 4.1 DATASETS, SETUP, AND METRICS Datasets: We select two “meta-evaluation” datasets, LFQA (Xu et al., 2023) and Vicuna80, with human annotations for pairwise comparisons, to measure the correlation between our evaluation methods and human judgments. LFQA (Xu et al., 2023) contains 140 long-form questions across seven domains (e.g., economics, history, and biology) and two candidate answers (from either GPT3 or Human) for each. Vicuna80 is a more complete version of the Vicuna dataset (Chiang et al., 2023). For more details, please refer to the appendix A. Metrics: For experiments on PR, we follow metrics in Wang et al. (2023a). We first conduct example-level pairwise comparisons. Specifically, each evaluation example (pairwise comparison) consists of a question and a pair of long-form answers. We compare the model predicted preference score against gold human preference and report Accuracy and Fleiss’ $\kappa$. Following Dettmers et al. (2023), we also compare model-predicted global ranking scores against human-judged ranking scores. Specifically, we report Elo scores (Elo, 1967) and win rate (WR) based rankings (Table 2). 
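Since the exact weighted-Elo procedure is deferred to Algorithm 1 in Appendix E, the sketch below only illustrates the idea from Section 3.1.2 behind the Elo scores reported in Table 2: a standard Elo update whose step size is scaled by the weight of the reviewer who judged the battle. The K-factor, the toy battles, and the weights are illustrative assumptions, not values from the paper.

```python
def weighted_elo(battles, reviewer_weight, k_factor=32, init=1000.0):
    """Elo ratings where each battle's update is scaled by its reviewer's weight.

    `battles` is a list of (reviewer, contestant_a, contestant_b, outcome) with
    outcome = 1.0 if a wins, 0.0 if b wins, 0.5 for a tie. A review from a
    higher-weight reviewer therefore moves the ratings more.
    """
    rating = {}
    for reviewer, a, b, outcome in battles:
        ra = rating.setdefault(a, init)
        rb = rating.setdefault(b, init)
        expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))        # win likelihood of a
        delta = k_factor * reviewer_weight[reviewer] * (outcome - expected_a)
        rating[a] = ra + delta                                       # unlikely outcomes move ratings more
        rating[b] = rb - delta
    return rating

# Hypothetical toy battles; in the paper the weights come from the previous
# iteration's normalized ratings, starting from equal weights for all reviewers.
weights = {"gpt4": 0.4, "claude": 0.35, "gpt35": 0.25}
battles = [
    ("claude", "gpt4", "vicuna", 1.0),
    ("gpt35", "vicuna", "bard", 0.5),
    ("gpt4", "claude", "gpt35", 1.0),
]
print(weighted_elo(battles, weights))
```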
We use All to denote our method where each reviewer has equal weights, and use All (weighted) to denote the setting where each reviewer is weighted by its final-round weight. Besides experiments on PR and PD respectively (Section 4.2 and Section 4.3), we also compare PR and PD in an experiment of judging answer qualities of GPT-3.5 vs. Vicuna-13b (Table 5). For experiments on PD, we use peer discussion accuracy (PDA) to describe the correlation of model discussion results with human annotators. PDA is calculated as the number of peer discussion preferences that agree with human annotations divided by the total number of examples. A high PDA result indicates a better correlation with human judgments.
Setup: For Vicuna-13b, we use the default version from Chiang et al. (2023). For all other API-based LLM models, we use specific versions of each: GPT-4-0613, GPT-3.5-turbo-0613, Claude-1, and PaLM-2 for GPT-4, GPT-3.5, Claude, and Bard, respectively. For more details, please refer to Appendix D.
4.2 RESULTS FOR PEER RANK (PR)
On the Vicuna80 dataset, we compare our PR method and representative LLM-based evaluation methods, such as GPT-4 and Claude. In Table 3, all reviewers and combinations listed except Claude, when compared to human reviews at an example level, display a Fleiss $\kappa$ of around 0.40, indicating fair to moderate agreement. There is a significant difference in accuracy between LLM reviewers. The worst reviewer is Claude, with an accuracy of only 60.69%. The best individual reviewer is GPT-4, with an accuracy of 64.25%. The combination of reviewers (PR) increases this accuracy by a few percentage points, with our PR approach being highest at 67.31%. Inspecting Table 2, all combinations of ranking methods listed give the same ranking of models: i.e., GPT-4 > Claude > Vicuna > GPT-3.5 > Bard. Thus, a weighted peer ranking provides a more accurate evaluation of language models at the level of the global performance of models. However, in terms of the Elo ratings provided by the human reviews, we clearly observe that GPT-4 favors its own answers and is prone to self-enhancement bias. The method that produces the closest Elo ratings is our approach of the weighted combination of all reviewers ("All weighted"). Furthermore, the method that produces the closest win rates (less than a 1% difference for many contestants) is also All weighted. In the beginning, when every reviewer has the same weight, the win rate given by "All reviewers" is lower (e.g., about 0.749 for GPT-4), partially because each reviewer is treated equally, so each reviewer might have a preference for its own answer. After several rounds/iterations, the final win rate becomes fairer. We display the final round weights in Figure 3.

| Reviewer | Fleiss Kappa | Accuracy |
|---------------------------|--------------|----------|
| GPT-3.5 | 0.387 | 0.621 |
| Claude | 0.319 | 0.607 |
| GPT-4 | 0.406 | 0.643 |
| GPT-4 & Claude & GPT-3.5 | 0.403 | 0.666 |
| All Reviewers (Weighted) | 0.410 | **0.673**|

Table 3: Example-level correlation results for peer rank. For the fourth and fifth rows, we take the peer reviewers' majority vote weighted by win rate.
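As a hedged illustration of the example-level metrics reported in Table 3 (accuracy against human preference and Fleiss' $\kappa$), one possible computation with `statsmodels` is sketched below; the preference matrix is fabricated purely for illustration and is not data from the paper, and the exact rater grouping used by the authors may differ.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical example-level preferences (0 = answer 1 preferred, 1 = answer 2 preferred).
# Rows are examples; columns are raters (here: one LLM reviewer and three human annotators).
prefs = np.array([
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
])
model_pref, human_prefs = prefs[:, 0], prefs[:, 1:]

# Example-level accuracy: model preference vs. the human majority vote.
gold = (human_prefs.mean(axis=1) >= 0.5).astype(int)
accuracy = (model_pref == gold).mean()

# Fleiss' kappa computed over all raters (model + humans).
table, _ = aggregate_raters(prefs)
kappa = fleiss_kappa(table)
print(f"accuracy={accuracy:.3f}, fleiss_kappa={kappa:.3f}")
```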
At the example level, a weighted peer ranking also provides higher accuracy and a minimally higher agreement with human reviews. Lastly, in Figure 4, we draw the line chart of how the GPT-4 Elo score changes as more battles are fed to the Elo algorithm. GPT-4's score takes off as the battle number increases. We can observe that GPT-4 displays self-enhancement across the entire process, while our PR-based evaluation correlates well with human pairwise comparisons.
4.3 Results for Peer Discussions (PD)
Prompt for Discussion In a preliminary study, we find that the template asking for explicit aspects, such as core information, unsupported information, and coherence, can essentially help LLM reviewers generate valuable and informative reviews that correlate better with human annotators. For more details regarding picking the prompts, please refer to Appendix C.

| | R1 | R2 | R1 lead | R2 lead | Random |
|----------------|------|------|---------|---------|--------|
| GPT4 & Claude | 0.729| 0.671| **0.743**| 0.729 | 0.729 |
| GPT4 & GPT35 | 0.729| 0.579| 0.714 | **0.750**| 0.731 |
| GPT35 & Claude | 0.579| 0.671| **0.700**| 0.671 | 0.686 |
| GPT35 & GPT35-0.8 | 0.579| 0.650| 0.664 | **0.686**| 0.681 |
| Claude & Claude-0.8 | 0.664| **0.707**| 0.693 | 0.671 | 0.680 |
| GPT4 & GPT4-0.8 | 0.779| 0.757| **0.779**| 0.757 | 0.779 |

Table 4: Peer Discussion Accuracy on LFQA.

**General Accuracy** In Table 4, we report the peer discussion accuracy (PDA) of multiple reviewer combinations on LFQA. We observe: (1) when two reviewer LLMs are of similar capabilities (e.g., GPT-4 and Claude), discussion usually yields relatively large improvements upon their initial reviews; (2) when there is a substantial gap between reviewer capabilities (e.g., GPT-4 and GPT-3.5), the PDA is usually below the stronger reviewer model's initial accuracy and higher than the weaker model's; (3) when models "self-discuss" (for example, we create two variants of the same model by setting different temperatures and prompt them to discuss), weaker models (e.g., GPT-3.5) can substantially "self-improve", whereas GPT-4's self-discussion brings little improvement. Designing better self-discussion strategies is worth future investigation. In Table 5, we report results on comparisons of GPT-3.5 vs. Vicuna-13b answers to Vicuna80 questions; the GPT-4 & Claude discussion increases the accuracy by over 1.5%. We also add the accuracy of the PR method in the table and find that the review becomes substantially better after weighted scoring.

**Peer discussions help mitigate self-enhancement bias** As we previously discovered, large language models (LLMs) suffer from self-enhancement bias when acting as judges – preferring the answers they generate or those of models from the same series (e.g., GPT-4 and GPT-3). We conduct experiments on the subset of LFQA questions where we have human-annotated pairwise comparisons between Human and Machine-generated (GPT-3 text-davinci-002) answers. Table 6 shows the win rates of GPT-3 judged by humans and three LLMs. We report the LLMs' initial and after-discussion preferences. Both GPT-3.5 and Claude highly prefer GPT-3's answers in their initial reviews. Specifically, GPT-3.5 significantly favors GPT-3 answers with a 13.79% higher win rate. After discussing with other LLMs, all models align better with humans.
Before discussions, GPT-4’s initial preference aligns well with humans and is almost the same as humans after peer discussions, – which proves it’s not favoring GPT-3 much and is more fair. **Peer discussions help mitigate position bias** Human-annotated pairwise comparisons are not affected by the position of answers. As indicated by recent work of Wang et al. (2023a) and Dettmers et al. (2023), LLMs are prone to position bias, describing that LLMs tend to show a preference for specific positions, even when prompted not to do so (Table 8 in Appendix). In Table 7, the win rate of GPT-3 is highly affected by its position when models generate initial reviews. GPT-3.5 highly prefers the answer in the first position compared to Claude and GPT-4. The GPT-3 win rate calculated by GPT-3.5 is 15.79% higher than the win rate based on human-annotated pairwise comparisons when GPT-3 appears first (73.68 vs 57.89). After peer discussion, all LLM reviewers have closer preferences to humans. Second, all LLMs’ scores for GPT-3 answers of both positions are closer as well, indicating that the position bias is mitigated after peer discussions. | | Initial Preference | After Discussion | |----------------|--------------------|------------------| | Human | 58.67% | | | GPT-3.5 | 72.46% | 62.22% | | Claude | 63.81% | 60.28% | | GPT-4 | 55.50% | 58.75% | Table 6: GPT-3 answer win rates judged by different reviewers on LFQA. For all LLM reviewers, we take the average accuracy of all discussions they participate in. Self-enhancement exists and is mitigated by PD. | | Initial Preference | After Discussion | |----------------|--------------------|------------------| | Human | | | | GPT-3.5 | 57.89% | 59.46% | | Claude | 73.68% | 64.41% | | GPT-4 | 63.16% | 55.70% | | GPT-3 First | 54.51% | 56.37% | | Human First | 57.89% | 59.46% | | GPT-3 First | 67.11% | 58.56% | | Human First | 55.70% | 55.41% | | GPT-3 First | 58.27% | 58.30% | Table 7: GPT-3 win rate (in the GPT-3 vs Human battles). Figure 5: The position bias of all three LLMs’ initial and after peer discussion reviews. Human has an equivalent preference for either position (red dotted line). From another perspective, Figure 5 shows the global preference of selecting answers at the first or second positions across different LLM reviewers. Overall, GPT-3.5 prefers answers at the first position. The other two models favor answers in the second position, similar to the position bias shown in Table 7. After peer discussion, it shows the same trend of mitigating position bias as well. 5 FURTHER ANALYSIS The reviewer who leads the discussion tends to hold its opinion. In a discussion between two LLM reviewers, we define the reviewer who leads the discussion as the leader and the other reviewer as the follower. We find that leaders are less likely to be convinced by followers when they insist on their own opinions at the first turn. We name it “Discussion Ordering Effect”. We observe this effect in discussions over the LFQA questions. We define two phenomenons which may happen during the discussions: (1) Opinion altering (OA): a reviewer changing its opinion after discussing with another reviewer. For example, R2 posts its preference at turn 2, which is different from R1’s preference at turn 1, then R1 changes its preference at turn 3 that agrees with R2; (2) Opinion holding (OH): a reviewer does not change its opinion even if another reviewer disagrees. 
For example, R1 posts its preference at turn 1 while R2 disagrees with R1 at turn 2; then, R1 still holds its preference at turn 3. As shown in Figure 6, all models have OA when they are in the follower position, while their number of OAs decreases significantly after they switch to the leader position. This implies that the discussion ordering effect exists. On the pairwise comparisons of LFQA where two reviewers initially disagree: when in the leader position, GPT-4 has zero OAs, and Claude has two OAs (both during discussions with GPT-3.5). When GPT-4 discusses with Claude, both of them hold their initial preferences when they are in the leader position.

**Stronger LLMs tend to hold their opinions** As shown in Figure 6, we add up the green mass (OH total) for each LLM reviewer to obtain their OH cases in both orderings. We see that models that are commonly recognized as being stronger (e.g., GPT-4) are firmer in their reviews and hold their opinions. For example, GPT-3.5 changes its opinion most often, and GPT-4 usually holds its opinion. More specifically, GPT-4 holds its opinion in 174 discussions, while Claude and GPT-3.5 hold theirs in only 94 and 76 discussions, respectively.

6 CONCLUSION

In this work, we provide promising prospects of using a peer evaluation method to improve LLM-based evaluations. Our framework mitigates potential biases (e.g., self-enhancement, positional) in previous prevalent methods. Our proposed peer rank process provides a fairer ranking of model capabilities. The peer discussion process helps models reach mutual agreements that correlate better with human preferences. In the future, we plan to investigate how the general peer evaluation process benefits LLMs in learning to assess their own answers and to answer new questions (Nicol et al., 2014).

REPRODUCIBILITY STATEMENT

In accordance with the commitment to transparency and reproducibility, we provide Python code to ensure the replicability of our research. Python code for peer rank and peer discussion is uploaded with the paper as supplementary material. The Python package requirement file is also included. All essential folders have corresponding README files inside. Both Vicuna80 and LFQA datasets are under the "data" folder. We have examined and confirmed that no potentially harmful content exists in the datasets we published. We believe these measures, along with the comprehensive information in the main paper, appendix, and supplementary materials, will empower researchers to replicate and build upon our work effectively. Our commitment to reproducibility aligns with the standards of ICLR, and we are dedicated to supporting the scientific community in advancing the field.

REFERENCES

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. *arXiv preprint arXiv:2112.00861*, 2021.

Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. Benchmarking foundation models with language-model-as-an-examiner. *arXiv preprint arXiv:2306.04181*, 2023.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, https://lmsys.org/blog/2023-03-30-vicuna/. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.

Kwangsu Cho and Charles MacArthur. Learning by reviewing. *Journal of educational psychology*, 103(1):73, 2011.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. *arXiv preprint arXiv:1803.05457*, 2018. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. *arXiv preprint arXiv:2305.14314*, 2023. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. *arXiv preprint arXiv:2305.14387*, 2023. Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, and Osmar R Zaiane. Evaluating coherence in dialogue systems using entailment. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 3806–3812, 2019. Arpad E Elo. The proposed uscf rating system. its development, theory, and applications. *Chess Life*, 22(8):242–247, 1967. Alexander Richard Fabbrì, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. Qafacteval: Improved qa-based factual consistency evaluation for summarization. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2587–2601, 2022. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 3558–3567, 2019. Joseph L Fleiss. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378, 1971. Joseph L Fleiss, Bruce Levin, and Myunghee Cho Paik. *Statistical methods for rates and proportions*. john wiley & sons, 2013.
CppEmee0u6
The author claims that incorporating the auxiliary modality would improve the model's performance on the target modality, even if there is no relevance between the data. However, the reasoning behind this enhancement in performance remains unexplained.
MULTIMODAL PATHWAY: IMPROVE TRANSFORMERS WITH IRRELEVANT DATA FROM OTHER MODALITIES Anonymous authors Paper under double-blind review ABSTRACT We propose to improve transformers of a specific modality with irrelevant data from other modalities, e.g., improve an ImageNet model with audio or point cloud datasets. We would like to highlight that the data samples of the target modality are irrelevant to the other modalities, which distinguishes our method from other works utilizing paired (e.g., CLIP) or interleaved data of different modalities. We propose a methodology named Multimodal Pathway: given a target modality and a transformer designed for it, we use an auxiliary transformer trained with data of another modality and construct pathways to connect components of the two models so that data of the target modality can be processed by both models. In this way, we utilize the universal sequence-to-sequence modeling abilities of transformers obtained from two modalities. As a concrete implementation, we use a modality-specific tokenizer and task-specific head as usual but utilize the transformer blocks of the auxiliary model via a proposed method named Cross-Modal Re-parameterization, which exploits the auxiliary weights without any inference costs. We observe significant and consistent performance improvements with irrelevant data of image, point cloud, video, and audio. For example, on ImageNet-1K, a point-cloud-trained auxiliary transformer can improve an MAE-pretrained ViT by 0.6% and a ViT trained from scratch by 5.4%. The code and models will be publicly available. 1 INTRODUCTION Transformers are widely adopted in various tasks across modalities, such as text classification (Devlin et al., 2019), object detection (Carion et al., 2020), point cloud analysis (Zhao et al., 2021), and audio spectrogram recognition (Gong et al., 2021a). Apart from numerous unimodal tasks, transformers are also effective on multimodal data, e.g., CLIP (Radford et al., 2021) uses image-text pairs to achieve superior performance in image recognition. Transformers’ success in multiple modalities demonstrates their abilities to universally establish sequence-to-sequence modeling, given the input sequences (i.e., tokens) which can be seen as the universal embeddings of data of multiple modalities (Dosovitskiy et al., 2021; Carion et al., 2020; Xie et al., 2021; Zhao et al., 2021; Gong et al., 2021a; Wang et al., 2022b). For brevity, we refer to such ability as the universal modeling ability. We would like to note that CLIP (Radford et al., 2021) represents the significant success of a methodology that improves a model’s performance on a certain modality (i.e., image) with the help of data from another modality (i.e., text), but the limitation is also apparent - the data samples from the two modalities must be relevant (i.e., paired). This limitation seems so inevitable that it hardly attracts research interest from the literature. Taking another two modalities, image and audio, as an example, we may expect that training with image-audio pairs may help the model recognize images (if we build a dataset with enough image-audio pairs and re-design the model to use the audio labels as the supervision, just like CLIP does with image-text pairs), but it seems hard to believe that a pure audio dataset would improve a model’s performance on ImageNet classification without any relevance between the audio and image samples. 
In this paper, we propose to improve a transformer’s performance on a certain modality even with irrelevant data from another modality. The motivation is that we can see a training process on a certain modality as converting the data of the modality to sequences (i.e., tokens) and establishing sequence-to-sequence mappings with the transformer blocks. For a specific modality, we reckon that the trained model has knowledge encoded in the sequence-to-sequence modeling that can facilitate another modeling process whose input sequences are obtained from another modality. In other words, apart from the obvious modality-specific knowledge acquired through training on a specific modality, we seek the **modality-complementary knowledge of sequence-to-sequence modeling in transformers** and will show that it does exist. However, given a target modality, it seems difficult to design the model to utilize some irrelevant data of another modality because the data samples of different modalities (e.g., image and audio) may vary significantly in the semantics, data format, preprocessing, and it seems hardly possible to design a reasonable objective function since there is no relevance between any two samples. In this paper, we solve this problem by not directly mixing training data of two modalities but seeing a model trained on a specific unimodal dataset as a proxy of the corresponding modality. and using the model instead. Specifically, given a target modality and an auxiliary modality, we propose a framework named Multimodal Pathway to improve the performance on the target modality by using two transformers respectively trained with the unimodal data of the two modalities. We construct pathways across the components of the target and auxiliary models to exploit the modality-complementary knowledge encoded in the latter to help the former. Note that pathway is an abstract concept that may refer to any connection between the two models. We name such a model as Multimodal Pathway Transformer (M2PT) for brevity. This paper proposes a simple yet effective implementation of M2PT, where the key is the concrete implementation of pathways that connect the two models. As discussed above, thanks to the universal modeling ability, transformers on different modalities may have different tokenizers, but their main bodies (i.e., transformer blocks) may have the same structure. For a target model and an auxiliary model with the same structure of the main bodies, a layer in the main body of the former should have a counterpart in the latter. For example, the counterpart of the Query layer in the 9th block of the target model, which is the 9th Query layer in the auxiliary model, should exist, and they play a similar role in the two models. Considering this, we build the connections between the two models by augmenting every linear layer in the transformer blocks of the target model with its counterpart in the auxiliary model. We let the two layers take the same inputs and add up their outputs, as shown in Fig. 2 (middle). However, considering the budget on compute and latency, we desire an implementation of the Multimodal Pathway that realizes the pathways and makes good use of the auxiliary model but does not increase the inference cost, compared to a regular model trained on the target modality. 
We would like to note that the structure described above can be equivalently implemented by a re-parameterization method, which equivalently converts the connections between model structures (i.e., linear layers) into connections between the two models’ weights. Specifically, we construct a pathway for each target linear layer by adding the corresponding weights of its counterpart in the trained auxiliary model scaled by a learnable multiplier that indicates the strength of the pathway, so that the method is named Cross-Modal Re-parameterization. A significant strength of re-parameterization is that we can merge the weights after training so that the structure and number of parameters of the resultant model will be identical to a regular model. We conduct experiments across the image, video, point cloud, and audio modalities. Figure 3 shows that M2PT brings consistent improvements among 4 modalities. In specific, with a base-scale transformer, M2PT achieves 83.9% (+0.6) top-1 accuracy on ImageNet-1K, 82.3% (+0.8) on Kinectis-400, 47.6% (+2.7) mIoU on PartNet, and 35.6% (+0.3) on Audioset. Such results demonstrate that M2PT effectively improves transformers with irrelevant data from other modalities. In summary, our contributions are as follows: • We propose Multimodal Pathway, which is a framework to improve transformers via exploiting models trained on other modalities. • We propose an inference-cost-free implementation of Multimodal Pathway, which is named Cross-Modal Re-parameterization. • Multimodal Pathway represents an early exploration in this direction, which offers a novel perspective. We realize significant and consistent improvements on four representative modalities, which demonstrates the potential of our method as a promising approach. 2 Method 2.1 Architectural Design We design a transformer for a specific modality as three modules - the modality-specific tokenizer, the modality-agnostic transformer blocks, and the modality-specific head. We assume the dimension of tokens is $D$, which is a pre-defined architectural hyper-parameter, and describe how to tokenize the input data of multiple modalities. **Image tokenizer.** Let’s consider an image represented by $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$, where $(H, W)$ specifies the image’s resolution, and $C$ is the channel count. We transform this image into a series of 2D patches, denoted by $\mathbf{x}_p$, with dimensions $\mathbb{R}^{N_s \times (S^2 \cdot C)}$. In this representation, $S$ is the size of each patch, and $N_s$ (calculated as $HW/S^2$) indicates the total number of these patches. Following this transformation, we employ a projection layer to adjust the embedding dimension to $D$. $$\mathbf{x}_I \in \mathbb{R}^{C \times H \times W} \rightarrow \mathbf{x}'_I \in \mathbb{R}^{N_s \times (S^2 \cdot C)} \rightarrow \mathbf{x}''_I \in \mathbb{R}^{N_s \times D}. \quad (1)$$ **Video tokenizer.** Analogous to 2D images, we utilize video patches as the basic unit for learning video representations. Given a video $\mathbf{x} \in \mathbb{R}^{T \times C \times H \times W}$, we use the same patch size $S$ so that the video can be reshaped to $\mathbf{x} \in \mathbb{R}^{N'_s \times (S^2 \cdot C)}$, where $N'_s = \frac{T \times H \times W}{S^2}$. Upon deriving the video token sequence, a projection layer is also applied to obtain the video embeddings. 
$$\mathbf{x}_V \in \mathbb{R}^{T \times C \times H \times W} \rightarrow \mathbf{x}'_V \in \mathbb{R}^{N'_s \times (S^2 \cdot C)} \rightarrow \mathbf{x}''_V \in \mathbb{R}^{N'_s \times D}. \quad (2)$$ **Point cloud tokenizer.** Consider a point cloud $\mathcal{X} = \{\mathbf{x}_i\}_{i=1}^P$ comprising $P$ points. Each point $\mathbf{x}_i$ is defined as $\mathbf{x}_i = (\mathbf{p}_i, \mathbf{f}_i)$, where $\mathbf{p}_i \in \mathbb{R}^3$ denotes the 3D coordinates and $\mathbf{f}_i \in \mathbb{R}^c$ represents the feature of the $i$-th point. Typically, $\mathbf{f}_i$ encompasses visual cues such as color, viewpoint, normal, and so on. We utilize the Farthest Point Sampling (FPS) technique to sample a representative skeleton from the original point clouds at a fixed sampling ratio of 1/4. Subsequently, the $K$-Nearest Neighbor (KNN) method is employed to group proximate points. Leveraging grouped sets that encapsulate local geometric priors, we craft an adjacency matrix centered on the grouped subsets, aiming to extract the intricate structural details of 3D objects and scenes. Ultimately, the structural representations from $K$ subsets are aggregated. In essence, $$\mathbf{x}_P \in \mathbb{R}^{P \times (3+c)} \rightarrow \mathbf{x}'_P \in \mathbb{R}^{\frac{P}{K} \times \frac{P}{K}} \rightarrow \mathbf{x}''_P \in \mathbb{R}^{\frac{P}{K} \times D}. \quad (3)$$ **Audio spectrogram tokenizer.** Initially, we preprocess an audio waveform of duration $t$ seconds using the log Mel filterbank (Schneider et al., 2019). We then apply the Hamming window with a stride of $t_s$ on the frequency $f_s$, segmenting the original wave into $l = \frac{t}{t_s}$ intervals, thereby transforming the wave into an $l$-dimension filterbank. The spectrogram is subsequently divided into patches across both time and frequency dimensions, each of size $S$. Notably, unlike image patches, audio patches exhibit overlap on spectrograms. We partition the entire spectrogram into $N_s = 12 \left\lfloor \frac{100t-16}{10} \right\rfloor$ patches using a $S \times S$ (Denote in Eq. [1]) convolution, and then flatten these patches into token sequences. Given $T$ and $F$ as the time and frequency dimensions respectively, the procedure can be summarized as $$\mathbf{x}_A \in \mathbb{R}^{T \times F} \rightarrow \mathbf{x}'_A \in \mathbb{R}^{N_s \times S \times S} \rightarrow \mathbf{x}''_A \in \mathbb{R}^{(N_s \cdot D/S^2) \times D}. \quad (4)$$ **Transformer blocks.** We simply adopt the structural design of the transformer blocks in Vision Transformer (ViT) (Dosovitskiy et al., 2021), where each transformer block comprises a self-attention block and a Feed-Forward Network (FFN) block. The linear layers include the Query/Key-/Value/projection layers in the attention block and two layers in the FFN block. For fairness and reproducibility, we use the same architectural hyper-parameters (e.g., dimension of tokens, number of blocks, and number of heads) as ViT-Base for every M2PT model on every modality. 2.2 Cross-Modal Re-parameterization For an M2PT model on a specific modality, we use Cross-Modality Re-parameterization in the transformer blocks to utilize another model’s weights trained on another modality. Specifically, let $\theta$ be an arbitrary trainable parameter of a layer in the transformer, $x$ be the input, $y$ be the output, we use $f$ to denote the operation so that $y = f(x; \theta)$. 
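Before completing the re-parameterization formula below, here is a minimal PyTorch sketch of the image tokenizer in Eq. (1): non-overlapping $S \times S$ patches followed by a shared linear projection to dimension $D$. The patch size 16 and $D = 768$ are assumptions matching a ViT-Base-style configuration, not values prescribed by the equation itself.

```python
import torch
import torch.nn as nn

class ImageTokenizer(nn.Module):
    """Sketch of the image tokenizer in Eq. (1): S x S patches -> D-dimensional tokens."""

    def __init__(self, patch_size: int = 16, in_channels: int = 3, dim: int = 768):
        super().__init__()
        # A stride-S convolution is equivalent to flattening non-overlapping patches
        # and applying a shared linear projection to every patch.
        self.proj = nn.Conv2d(in_channels, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, C, H, W)
        tokens = self.proj(x)                                 # (B, D, H/S, W/S)
        return tokens.flatten(2).transpose(1, 2)              # (B, N_s, D), N_s = HW / S^2

tokens = ImageTokenizer()(torch.randn(2, 3, 224, 224))
print(tokens.shape)                                           # torch.Size([2, 196, 768])
```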
With Cross-Modality Re-parameterization, we simply re-parameterize the layer with parameters of its counterpart in another modal that are trained on another modality. Let $\theta'$ be the parameter of the counterpart, the operation becomes $$y = f(x; \theta + \lambda \theta'). \quad (5)$$ We refer to $\lambda$ as the Cross-Modal Scale and $\theta'$ as the Cross-Modal Parameter. After training, we merge the model by computing and saving $\hat{\theta} = \theta + \lambda\theta'$ so that the model will no longer have extra parameters and the inference costs and model size will be identical to a regular model. With Cross-Modal Re-parameterization, we equivalently realize the proposed M2PT transformer block with marginal training costs and completely no inference costs. For a linear layer whose parameters form a matrix $W \in \mathbb{R}^{D_{in} \times D_{out}}$ and the inputs and outputs are matrices $x \in \mathbb{R}^{B \times D_{in}}$ and $y \in \mathbb{R}^{B \times D_{out}}$. We omit the bias term for brevity and the original operation is formulated by $$y = xW.$$ (6) As described in Fig. 2, the linear layer and its counterpart take the same input. The output will be $$y = xW + \lambda(xW').$$ (7) Note $$xW + \lambda(xW') = x(W + \lambda W'),$$ (8) so that the two layers can be equivalently implemented by a single layer that has a trainable scalar $\lambda$ and an additional trainable matrix which is initialized with the counterpart in the auxiliary model. Both the original weight matrix and the additional one are trainable. At each forward computation, the layer computes the equivalent weight matrix and then uses it to project the input, which is $$y = x(W + \lambda W').$$ (9) After training, we merge the parameters by computing $\hat{W} = W + \lambda W'$ and save it only. For inference, we simply construct a regular linear layer and load $\hat{W}$. In summary, to construct and use an M2PT with Cross-Modal Re-parameterization, we - Construct the tokenizer and head according to the target modality. - Construct the transformer blocks with Cross-Modal Re-parameterization. For each linear layer, except for the original weight matrix, we add an extra trainable weight matrix and initialize it with the corresponding one from a transformer trained on the auxiliary modality and add a trainable scalar parameter initialized with 0. - Train the re-parameterized cross-modal model just like we train a regular model. - After training, convert the trained model and save the converted one for inference. 3 EXPERIMENTS 3.1 SETUP Datasets. For image recognition, we evaluate the models’ performance on three representative image datasets. 1) ImageNet-1K (Deng et al., 2009) is the most widely adopted benchmark for visual perception tasks, which contains nearly 1.3 million images of 1000 categories. 2) MSCOCO 2017 (Lin et al., 2014) is a common benchmark for object detection. M2PT is trained on the train set and evaluated on the val set with Mask RCNN (He et al., 2017). 3) ADE-20K (Zhou et al., 2017) is used for semantic segmentation experiments with UperNet (Xiao et al., 2018) and we adopt the single-scale evaluation setting. For point cloud, we evaluate the performance of M2PT on ShapeNet-Part (Yi et al., 2016), which contains 16,880 models and 16 categories. For audio recognition, following AudioMAE (Huang et al., 2022), we utilize the AudioSet-2k (Gemmeke et al., 2017) dataset. 
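To make Eqs. (5)–(9) and the construction steps above concrete, the following is a minimal PyTorch sketch of a cross-modal re-parameterized linear layer, including the post-training merge $\hat{W} = W + \lambda W'$. The layer sizes, the `merge` helper, and the way auxiliary weights are loaded are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CrossModalLinear(nn.Module):
    """Linear layer with Cross-Modal Re-parameterization (Eqs. 5-9): y = x(W + lambda * W')."""

    def __init__(self, target_linear: nn.Linear, aux_linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(target_linear.weight.detach().clone())   # W  (target modality)
        self.bias = nn.Parameter(target_linear.bias.detach().clone())
        self.aux_weight = nn.Parameter(aux_linear.weight.detach().clone())  # W' (auxiliary modality)
        self.scale = nn.Parameter(torch.zeros(1))                           # lambda, initialized to 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Single matmul with the combined weight, Eq. (9).
        return nn.functional.linear(x, self.weight + self.scale * self.aux_weight, self.bias)

    def merge(self) -> nn.Linear:
        """After training, fold W_hat = W + lambda * W' into a plain nn.Linear for inference."""
        out_features, in_features = self.weight.shape
        merged = nn.Linear(in_features, out_features)
        with torch.no_grad():
            merged.weight.copy_(self.weight + self.scale * self.aux_weight)
            merged.bias.copy_(self.bias)
        return merged

# Usage sketch: pair each linear layer of the target transformer with its counterpart
# (same block index, same role) in the auxiliary transformer, e.g. the Query projection
# of block 9 with the auxiliary Query projection of block 9.
target_q, aux_q = nn.Linear(768, 768), nn.Linear(768, 768)   # aux_q would load auxiliary weights
layer = CrossModalLinear(target_q, aux_q)
y = layer(torch.randn(4, 197, 768))
plain = layer.merge()                                          # identical outputs, no extra parameters
```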
For video, we experiment on the action recognition dataset, Kinetics-400 (Kay et al., 2017), which contains 240k training videos and 20k validation videos categorized into 400 classes. Experimental details. For a pair of target modality and auxiliary modality, we obtain the auxiliary model by self-supervised training on a dataset of the auxiliary modality. Specifically, the auxiliary image model is pretrained with MAE (He et al., 2022) on ImageNet-1K (Deng et al., 2009), the auxiliary point cloud model is pretrained with Point-MAE (Pang et al., 2022) on ShapeNet (Chang et al., 2015), the auxiliary audio model is pretrained with AudioMAE (Huang et al., 2022) on AudioSet-2M (Gemmeke et al., 2017), the auxiliary video model is pretrained with VideoMAE (Tong et al., Table 1: Experimental results on image recognition tasks. On ImageNet, we report the results with the linear layers in transformer blocks finetuned (tune acc) or fixed (fix acc). *: results are reported by running the official code. The architecture of every model is ViT-B. | Method | ImageNet | MS COCO | ADE20K | |-------------------------|----------|---------|--------| | | tune acc(%) | fix acc(%) | AP_{box}(%) | AP_{mask}(%) | mIoU(%) | | Pretrained setting | | | | | | | SemMAE [Li et al., 2022a] | 83.4 | 65.0 | - | - | 46.3 | | MFF [Liu et al., 2023] | 83.6 | 67.0 | 48.1 | 43.1 | 47.9 | | MAE* [He et al., 2022] | 83.3 | 65.6 | 47.3 | 42.4 | 46.1 | | M2PT-Video (Ours) | 83.6 ↑ 0.3 | 67.1 ↑ 1.5 | - | - | - | | M2PT-Audio (Ours) | 83.7 ↑ 0.4 | 67.3 ↑ 1.7 | - | - | - | | M2PT-Point (Ours) | 83.9 ↑ 0.6 | 67.8 ↑ 2.2 | 50.0 ↑ 2.7 | 44.0 ↑ 1.6 | 47.9 ↑ 1.8 | | From-scratch setting | | | | | | | ViT [Dosovitskiy et al., 2021] | 76.5 | 14.5 | 46.2 | 40.5 | 39.7 | | M2PT-Point (Ours) | 81.9 ↑ 5.4 | 19.5 ↑ 5.0 | 48.9 ↑ 2.7 | 42.2 ↑ 1.7 | 42.5 ↑ 2.8 | (2022) on Kinetics-700 (Kay et al., 2017). For the fairness and reproducibility, we use their official code for pretraining. We do not use supervised pretraining because we would like to eliminate the effects of labels in the pretraining datasets. In terms of the initialization of the target model, we adopt two settings. 1) The target model (i.e., the parameters denoted by $W$ in Eq. 9) is initialized with the aforementioned weights pretrained with the self-supervised methods on the target modality. We finetune the M2PT model with the default finetuning configurations described by the corresponding pretraining methods. The baseline model is also initialized with the pretrained weights and fine-tuned with identical configurations so that this setting is referred to as the pretrained setting for brevity. 2) The target model is randomly initialized as usual, and we use the widely adopted training configurations to train the M2PT model. The baseline model is trained from scratch with identical configurations for fair comparisons so that the setting is referred to as the from-scratch setting for brevity. In other words, the M2PT and the baseline models both have no weights pretrained on the target modality under this setting. 3.2 Main Results Image recognition. We first conduct a group of experiments under the pretrained setting, where the target weights are initialized with a ViT pretrained with MAE on ImageNet, and the auxiliary weights are from the models pretrained on video, audio, and point datasets, respectively. 
Such three models, which are labeled as M2PT-Video, M2PT-Audio, and M2PT-Point, respectively, and the baseline (the original MAE-pretrained ViT) are trained on ImageNet with the finetuning configurations originally adopted by MAE [He et al., 2022], and the resultant accuracies are reported in the “tune acc” column in Table 1. Then we transfer the best-performing model, which is M2PT-Point, to COCO object detection and ADE20K semantic segmentation tasks. The improvements are significant: on ImageNet, COCO, and ADE20K, the accuracy, box AP, and mIoU are improved by 0.6%, 2.7%, and 1.8%, respectively. Apart from finetuning the target and auxiliary weights, we test another setting where the parameters of linear weights in transformer blocks are fixed, and only the Cross-Modal Scales together with the classifier are trainable. The accuracies are reported in the “fix acc” column. Naturally, under this setting, the baseline should be the MAE-pretrained ViT where only the classifier is trainable. Impressively, the improvement increases to 2.2% (67.8% vs. 65.6%), demonstrating that the weights obtained from the auxiliary modality work on another modality, even when the weights are fixed. On the other hand, under the from-scratch setting, the baseline is a ViT trained from scratch, and the target weights of M2PT are also randomly initialized. The accuracy is drastically improved by 5.4% (81.9 vs. 76.5), suggesting the auxiliary weights significantly facilitate the training process. Intuitively, the Cross-Modal Scales are initialized with 0 but will soon become non-zero as the training proceeds so the model will be gradually influenced by the auxiliary weights and benefit Table 2: Experimental results on point cloud and audio recognition. For point cloud analysis, we compare M2PT with PointNet++ (Qi et al., 2017), Point-BERT (Yu et al., 2022), and Point-MLP (Ma et al., 2022). For audio recognition, we compare with PSLA (Gong et al., 2021b), AST (Gong et al., 2021a), (Gong et al., 2022), and AudioMAE (Huang et al., 2022). (a) Point cloud | Method | ShapeNetPart | PartNet | |-----------------|--------------|---------| | Pretrained setting | | | | PointNet++ | 81.9 | 85.1 | | Point-BERT | 84.1 | 85.6 | | Point-MLP | 84.6 | 86.1 | | Point-MAE | 84.2 | 86.1 | | M2PT-Video | 85.6 ± 1.4 | 87.5 ± 1.4 | | M2PT-Image | 85.6 ± 1.4 | 87.5 ± 1.4 | | M2PT-Audio | 85.6 ± 1.4 | 87.5 ± 1.4 | (b) Audio recognition | Method | Model | Top-1 Acc. (%) | |-----------------|--------------|----------------| | Pretrained setting | | | | PSLA | CNN+Trans | 31.9 | | AST | ViT-B | 34.7 | | SSAST | ViT-B | 31.0 | | AudioMAE | ViT-B | 35.3 | | M2PT-Point | ViT-B | 35.6 ± 0.3 | | M2PT-Video | ViT-B | 35.5 ± 0.2 | | M2PT-Image | ViT-B | 35.6 ± 0.3 | From-scratch setting | Method | Model | Top-1 Acc. (%) | |-----------------|--------------|----------------| | N/A | ViT-B | 11.0 | | M2PT-Point | ViT-B | 11.4 ± 0.4 | from the modality-complementary knowledge. When we transfer such two models to COCO and ADE20K, we observe consistent improvements of +1.7% AP and +2.8% mIoU. 3D Point Cloud Understanding. Table 2a presents the experimental results on ShapeNetPart and PartNet datasets, where we compare M2PT with existing point cloud pretraining methods such as Point-BERT (Pang et al., 2022) and Point-MAE (Yu et al., 2022). M2PT brings out +1.4 class mIoU and +1.4 instance mIoU. Similarly, under the from-scratch setting, we observe +0.6% and +0.4% improvements in the class and instance mIoU, respectively. Audio Understanding. 
For the pretrained setting, the target weights are initialized with an AudioMAE-pretrained model. As shown in Table 2b, we compare M2PT with existing methods including state-of-the-art pretraining-based methods SSAT and AudioMAE. M2PT brings out +0.3% top-1 accuracy on the Audioset balanced split, demonstrating that M2PT is also effective in audio recognition. Under the from-scratch setting, M2PT can bring out an improvement of +0.4%. Video Understanding. For the experiments on Kinetics-400, we adopt only the pretrained setting because it is not a common practice to train a model from scratch on a video dataset, which would deliver inferior performance. We use the VideoMAE-pretrained ViT to initialize the target weights. Naturally, the baseline should be the VideoMAE-pretrained model directly finetuned on Kinetics-400. Table 3 shows that compared with previous works including SlowFast (Feichtenhofer et al., 2019), MViTv2 (Li et al., 2022b), TimeSFormer (Bertasius et al., 2021), and VideoMAE (Tong et al., 2022), M2PT outperforms by at least +0.8% top-1 accuracy, which reveals that the temporal awareness for video understanding can also be enhanced with irrelevant data from other modalities. Table 3: Experimental results on the Kinetics-400 datasets. | Method | Model | Top-1 Acc. (%) | |-----------------|--------------|----------------| | SlowFast-101 | ResNet-101 | 79.8 | | MVITv2-B | ViT-B | 81.2 | | TimeSFormer | ViT-B | 80.7 | | VideoMAE | ViT-B | 81.5 | | M2PT-Point | ViT-B | 82.3 ± 0.8 | | M2PT-Image | ViT-B | 82.2 ± 0.7 | | M2PT-Audio | ViT-B | 82.2 ± 0.7 | 3.3 Ablation Studies We evaluate the design choices of M2PT separately through a group of ablation studies under the pretrained setting on ImageNet and the auxiliary modality is point cloud. Applying Cross-Modal Re-parameterization to every linear layer delivers the best performance. In each transformer block, we may choose to apply our method to any of the Query/Key-/Value/projection layers in the attention block and the two linear layers in the FFN. Table 4 shows that changing any one of the layers brings improvements, and the best result is achieved by changing them all. Table 4: **Ablation studies** on design choices of M2PT including the layers to re-parameterize and configurations of Cross-Modal Scale $\lambda$. The target dataset is ImageNet-1K and the auxiliary modality is point cloud. | Components | Cross-Modal Scale | Top-1 accuracy (%) | |------------|-------------------|--------------------| | Attn QKV | 0 | 83.4 | | Attn Proj | 0 | 83.6 | | FFN 1st | 0 | 83.6 | | FFN 2nd | 0 | 83.7 | | | 0 | 83.9 | | | $10^{-2}$ | 83.5 | | | $10^{-2}$ | 83.6 | | | $10^{-4}$ | 83.6 | | | $10^{-6}$ | 83.7 | Cross-Modal Scale should be initialized with 0. By default, we initialize the Cross-Modal Scale $\lambda$ with 0 for every layer. We observe that initializing it to a higher value degrades the performance, suggesting that the initial state of the M2PT should be identical to the target weights (i.e., the weights pretrained with MAE, in this case). Cross-Modal Scale should be learnable. Fixing the Cross-Modal Scale turns out to degrade the performance, suggesting that it is important to let the model learn how to combine the target weights and the corresponding auxiliary weights. ### 3.4 Empirical Discussions #### 3.4.1 Investigating the Modality-Complementary Knowledge The observed improvements on multiple modalities have shown that the auxiliary transformer has learned some knowledge that is able to transfer to the target modality. 
We continue to investigate the properties of such modality-complementary knowledge through two groups of experiments (Table 5). 1) We investigate if such knowledge is related to the ability to generally process hierarchical representations. Abstraction hierarchy exists in multiple modalities with concepts ranging from low-level to high-level, which may explain the transferability of the learned knowledge. For example, in image and point cloud, this hierarchy may include textures (in image) or individual points (in point cloud), object parts, and whole objects. Considering that the conceptual level a transformer block works on is determined by its depth, we design an experiment by reverting the order of the auxiliary weights. Specifically, the counterpart of the 1st target block should be the 1st auxiliary block, whose weights are connected via Cross-Modal Re-parameterization, which is obvious. However, since the transformer has 12 blocks, in this set of experiments we let the $i$-th block connect with the $(13 - i)$-th block so that the target-auxiliary correspondence is interrupted. We observe that doing so decreases the accuracy to 83.61%, which is 0.32% lower than the normal M2PT. In summary, the modality-complementary knowledge in the auxiliary transformer can transfer to another modality but can be harmed if the low-to-high correspondence is interrupted, suggesting that such knowledge may help understand general hierarchical concepts regardless of the modality. 2) We investigate if a better pretraining process brings such knowledge of higher quality. We experiment using not well-trained weights as the auxiliary weights. Specifically, the default auxiliary weights are obtained through a 300-epoch self-supervised pretraining process on point cloud data, but we alternatively use the checkpoints saved at the 20th and 220th epoch, respectively, as the auxiliary weights. Not surprisingly, we observe that the performance degrades to 83.55% and 83.69%, respectively, which is still higher than the baseline. Table 5: ImageNet accuracy with changed order of auxiliary weights or fewer pretraining epochs. | Order of aux weights | Epochs pretrained | Top-1 acc | |---------------------|------------------|-----------| | Normal | 20 | 83.55 | | Normal | 220 | 83.69 | | Normal | 300 | 83.93 | | Reversed | 300 | 83.61 | This phenomenon suggests that the improvements brought by the auxiliary weights cannot be simply explained that the weights trained on another modality merely offer an initialization hardly better than the random initialization (if so, training the checkpoint at the 220th epoch to 300 epochs would not bring observable eventual improvements on the target modality). ### 3.4.2 Discussion on Data Scale Previous works such as Image2Point (Xu et al., 2022) and Point-CLIP (Zhang et al., 2022) follow a common consensus that the modality owning a larger data scale could be utilized to benefit the other modality owning a smaller one. Therefore, Image2Point introduces image-pretrained models to data-insufficient 3D perception tasks. Differently, M2PT sets up a brand new methodology and breaks the former consensus: we discover that even though the data scale of point clouds is limited, such data still brings out impressive improvements to the image, video, and audio perception tasks. Impressively, The pretraining data of the latter modalities is larger in magnitude than that of point cloud, but point cloud makes a difference. 
The effectiveness of M2PT inspires us that for 3D vision research, which lacks huge-scale data for pretraining, M2PT introduces a promising direction to leverage irrelevant large-scale data from other modalities. ## 4 Related Work **Pretraining methods.** The evolution of unimodal pretraining paradigms has transitioned from supervised to self-supervised paradigms. For instance, (Devlin et al., 2019) introduced the mask-reconstruction paradigm and achieved remarkable outcomes. At that time, visual pretraining largely emphasized contrastive learning (Chen et al., 2020; He et al., 2020; Caron et al., 2021). Subsequently, leveraging the vast amounts of unlabeled data, the BERT paradigm gained traction and pioneers like MAE (He et al., 2022) successfully applied it to visual pretraining, while others (Pang et al., 2022; Gong et al., 2021a; Tong et al., 2022) extended this paradigm to areas like point cloud, audio, and video perception. On the other hand, multimodal pretraining methods (Wang et al., 2021a,b; 2022a,b) are developing rapidly, which commonly uses the cross-attention mechanism as a cornerstone or large-scale paired cross-modal data (Radford et al., 2021). In contrast, our method does not require any pairwise data but realizes cross-modal improvements with the original model structure which only comprises unimodal self-attention blocks. **Structural Re-parameterization** is a methodology that constructs extra structures (e.g., convolutional layers) during training and converts the trained structures via transforming the parameters. A primary drawback of Structural Re-parameterization is that the constructed layers must participate in the forward and backward computations, resulting in significant extra training costs. In contrast, Cross-Modal Re-parameterization is a simple re-parameterization method so that the only extra computation in the forward computation is adding up two weight matrices. ## 5 Conclusion and Limitation This paper aims to explore the feasibility and advantages of improving a model’s performance on a specific modality with irrelevant data from other modalities. We propose a general framework named Multimodal Pathway and a concrete inference-cost-free implementation named Cross-Modal Re-parameterization. Multimodal Pathway represents an early exploration in this direction, which offers a novel perspective. We realize significant and consistent improvements on four representative modalities, which demonstrates the potential of our method as a promising approach. The primary limitation is that the theory behind the improvements remains to be revealed. Apart from empirical explanations, we believe further investigations (e.g., a mathematically provable bound) will be useful, which may require a deeper understanding of the black box of deep neural networks. REFERENCES Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, volume 2, pp. 4, 2021. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pp. 213–229. Springer, 2020. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9650–9660, 2021. 
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv:1512.03012, 2015. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248–255. Ieee, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 6202–6211, 2019. Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp. 776–780. IEEE, 2017. Yuan Gong, Yu-An Chung, and James Glass. Ast: Audio spectrogram transformer. arXiv preprint arXiv:2104.01778, 2021a. Yuan Gong, Yu-An Chung, and James Glass. Psla: Improving audio tagging with pretraining, sampling, labeling, and aggregation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3292–3306, 2021b. Yuan Gong, Cheng-I Lai, Yu-An Chung, and James Glass. Ssast: Self-supervised audio spectrogram transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 10699–10709, 2022. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, pp. 2961–2969, 2017. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729–9738, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.
jVuknNhGmV
Definition 2 was hard to follow. Please write the integral in the expectation explicitly. It seems we are taking a random function, finding its best expected approximation in the Wasserstein sense (where the expectation is over the randomness of the function), and then inverting this now-deterministic function. Is that right?
Causal Inference on Distributional Outcomes under Continuous Treatments Anonymous authors Paper under double-blind review Abstract Causal inference is widely practiced in various domains. Existing literature predominantly focuses on causal estimators for scalar or vector outcomes. However, real-world scenarios often involve response variables that are better represented as distributions. This paper addresses the need for causal inference methods capable of accommodating the distributional nature of responses when the treatments are continuous variables. We adopt a novel framework for causal inference within a vector space that incorporates the Wasserstein metric. Drawing upon Rubin’s causal framework, we introduce three estimators, namely the Distributional Direct Regression (Dist-DR), Distributional Inverse Propensity Weighting (Dist-IPW), and Distributional Doubly Machine Learning (Dist-DML) estimators, tailored for estimating target quantities, i.e., causal effect maps. We thoroughly examine the statistical properties of these estimators. Through two experiments, we validate the efficacy of the proposed methodology, establishing its practical utility. 1 Introduction The investigation of how treatments influence outcomes, known as causal inference, is a common practice across diverse domains, e.g., medical (Robins et al., 2000) and finance (Huang et al., 2021). To explore these effects, researchers have introduced and studied different causal estimators, such as the average treatment effect (ATE), the quantile treatment effect (QTE), and the conditional average treatment effect (CATE) (Chernozhukov & Hansen, 2005; Chernozhukov et al., 2018; Abrevaya et al., 2015; Hartman et al., 2015). However, all the aforementioned causal quantities that appear in the literature primarily center on scenarios where the realization of the outcome variable for each unit can be represented as a scalar or vector. However, there are many practical situations where the response for each unit should be described as a distribution. An illustrative example can be found in the investigation of the impact of working hours on individuals’ activity intensity behaviors. One’s activity intensities are typically recorded at regular intervals (e.g., 1 min), and these records collectively form an activity distribution that encapsulates an individual’s activity behavior. Notably, different users may exhibit various activity distributions. For instance, as depicted in Figure 1a, the activity intensity distributions of 10 users are displayed, each exhibiting distinct preferences for various activity intensities. Moreover, consider the scenario in Figure 1b, where two users (A and B) initially have the same activity intensity distribution with a mean of 30. Upon adopting treatments, User A increases intensity for all activities by 20 units, resulting in a rightward shift of the distribution by 20 units, while the shape remains unchanged. Consequently, the mean of the distribution increases from 30 to 50. On the other hand, User B only enhances intensity for high-intensity activities, leading to a significant transformation in the distribution’s shape. Nonetheless, the distribution’s mean remains at 50. In this context, focusing solely on scalar outcomes as causal quantities in literature, e.g., the mean of the activity intensity distribution, fails to reveal the distinct behavioral patterns of these two users. 
As such, there arises a need for causal inference methods that can account for the distributional nature of responses, enabling a more accurate characterization of treatment effects. This paper endeavors to fill this gap by exploring causal inference within a vector space encompassing a spectrum of distributions in scenarios featuring continuous treatment variables. We first equip such vector space with a suitable metric for quantifying dissimilarity between distributions. In contrast to the conventional Euclidean metric, which merely averages distributions pointwisely, we opt for the Wasserstein metric, renowned for preserving the inherent structure of random distributions more effectively. Grounded in Rubin’s foundational causal framework, we introduce three distinct estimators for target quantities, termed the causal effect map, which is analogous to the ATE in the classical causal framework. We comprehensively explore the statistical asymptotic properties that underlie these estimators. Subsequently, to empirically ascertain the efficacy of our proposed methodologies, we conduct two experiments, including one simulation and one real-world dataset. Our findings underscore the effectiveness of all three estimators. The contributions of this paper are threefold: - We introduce a novel non-parametric framework and three distinct cross-fitted estimators for inferring causal effects when the treatment variable takes continuous values. - We study the asymptotic properties characterizing the cross-fitted estimators, offering valuable insights into the statistical performance and reliability of the proposed estimators. - We perform two experiments to validate our proposed estimator, and the results from the numerical experiments are consistent with our theoretical findings. 2 RELATED WORK The key assumption of classical causal inference is that, given the treatment $A = a$, the realization of response variables for each unit is a scalar point drawing from the same potential outcome distribution. Under the assumption, several causal quantities are introduced and studied. For instance, ATE (Chernozhukov et al., 2018) is the difference between the means of any two potential outcome distributions (i.e., $\mathbb{E}[Y(A = \bar{a})] - \mathbb{E}[Y(A = a)]$). CATE is the mean difference of two potential outcomes in the total population conditioning on some covariates (Fan et al., 2022). Instead of studying the mean of potential outcome distribution, QTE (Chernozhukov & Hansen, 2005) focus on the difference between two potential outcome distributions at $\tau$-quantiles (i.e., $Q(\tau; Y(A = \bar{a})) - Q(\tau; Y(A = a))$). The general approach to estimating the causal effect between treatment and outcome is constructing the estimators for the target quantities. The simplest method, called the Direct Regression (DR) approach, is regressing the relation between the response and the features pair of treatment and covariates. However, the estimated relation from the observed dataset can be biased since the dataset is always not randomized. To address the issues, the inverse propensity weighting (IPW) method is introduced (Rosenbaum & Rubin, 1983; Hirano et al., 2003), aiming to formulate a pseudo-population and obtain the estimators for the target quantities in the pseudo-population. However, using the estimated extreme propensity score always gives the estimates with large variance. 
To overcome these challenges, the Doubly Machine Learning (DML) approach is proposed, which is endowed with a doubly robust property (Chernozhukov et al., 2018; Colangelo & Lee, 2019). The above methods are restricted when the outcome of each unit consists of many observations that together constitute a distribution. Thus, it is necessary to seek alternative frameworks for distributional outcomes. Indeed, a distribution can be treated as a special case of a functional outcome and is closely related to the field of functional data analysis (FDA), which analyzes data whose information varies over a continuum [Cai et al., 2022; Chen et al., 2016]. Specifically, Ecker et al. (2023) considers a causal framework to study the impact of treatment on a functional outcome. However, their work conducts causal inference in Euclidean space, in which the random structure of the distributional outcome can be destroyed [Verdinelli & Wasserman, 2019; Panaretos & Zemel, 2019]. As such, Lin et al. (2021) considers causal inference in the Wasserstein space, but only for treatment variables taking binary values. We extend their framework to continuous treatments and propose three distinct estimators. We provide more detailed comparisons between our framework and the classical framework in Appendix B. 3 BACKGROUND 3.1 NOTATIONS In this paper, we adopt the notation $A \in \mathcal{A} \subset \mathbb{R}$ to denote the continuous treatment variable. The $m$-dimensional vector $\mathbf{X} = [X^1, \cdots, X^m] \in \mathcal{X}$ corresponds to the covariates/confounders. The response variable is denoted as $\mathcal{Y}$, and we use $\mathcal{Y}(a)$ to signify the response variable associated with a specific treatment value $a$. We assume that the realization of $\mathcal{Y}$ and $\mathcal{Y}(a)$ is a distribution instead of a scalar value. Specifically, we focus on a sub-case where the functional response corresponds to the cumulative distribution function (CDF) within the Wasserstein space denoted as $\mathcal{W}_2(\mathcal{I})$. We finally assume that there exist $N$ samples denoted as $(\mathbf{X}_s, A_s, \mathcal{Y}_s)_{s=1}^N$. 3.2 CAUSAL ASSUMPTIONS As with previous studies [Rubin, 1978; 2005], our approach relies on four assumptions: (1) Stable Unit Treatment Value Assumption (SUTVA), (2) Consistency, (3) Ignorability, and (4) Overlap. We defer detailed statements of these assumptions to Appendix A. 3.3 WASSERSTEIN SPACE Given that the outcome in our paper is a CDF, it becomes crucial to define a vector space that encompasses the CDF and establish an appropriate distance measure to compare two CDFs effectively. To begin, let $\mathcal{I} \subset \mathbb{R}$; we define the vector space $\mathcal{W}_p(\mathcal{I})$ that comprises CDFs defined on $\mathcal{I}$ and satisfying the condition: $$\mathcal{W}_p(\mathcal{I}) = \left\{ \lambda \text{ is a CDF on } \mathcal{I} \mid \int_{\mathcal{I}} |t|^p d\lambda(t) < \infty \right\}, \quad \text{where } p \geq 1.$$ Using Jensen's inequality, we can conclude that $\left( \int_{\mathcal{I}} |t|^q d\lambda(t) \right)^{\frac{1}{q}} \leq \left( \int_{\mathcal{I}} |t|^p d\lambda(t) \right)^{\frac{1}{p}}$ when $1 \leq q \leq p$. Hence, $\mathcal{W}_p(\mathcal{I})$ contains all CDFs $\lambda$ whose moments are finite up to order $p$. We then establish a distance metric between two CDFs.
The simplest measure that can be employed is the Euclidean $p$-measure, where the distance between two CDFs can be computed as the differences between the two CDFs point-wisely. Mathematically, given two CDFs $\lambda_1$ and $\lambda_2$ defined on $\mathcal{I}$, the Euclidean $p$-measure is $\left( \int_{\mathcal{I}} |\lambda_1(t) - \lambda_2(t)|^p dt \right)^{\frac{1}{p}}$. However, the Euclidean $p$-measure is not an optimal metric for characterizing the distance between two CDFs since averaging all the values of the distributions will destroy the structural properties of the resulting distribution, leading to a loss of essential characteristics. A concrete illustration of this issue is provided in Figure 2, which showcases ten distributions with distinct means and variances in the top plot. When these distributions are averaged using the Euclidean metric, the resulting green line in the bottom plot demonstrates that the bell shape characteristic of a normal distribution is not preserved. Figure 2: Examples for the average distribution of the 10 distributions using the Euclidean and Wasserstein metric. Apart from the usual Euclidean measure, we can also use the $p$-Wasserstein metric \cite{Villani2021, Panaretos2019, Feyoux2018}, which is defined as **Definition 1** Given two random variables $V_1$ and $V_2$, let the marginal CDFs of $V_1$ and $V_2$ be $\lambda_1$ and $\lambda_2$ that are defined on $\mathcal{I}$. Besides, let $\Lambda$ be the set containing all the joint densities of $V_1$ and $V_2$. The $p$-Wasserstein metric is given as $D_p(\lambda_1, \lambda_2)$ such that $$D_p(\lambda_1, \lambda_2) = \left\{ \inf_{\tilde{\lambda} \in \Lambda} \int_{\mathcal{I} \times \mathcal{I}} \gamma(s, t)^p d\tilde{\lambda}(s, t) \right\}^{\frac{1}{p}}. \quad (1)$$ In Definition 1, $\gamma(s, t)$ is a function such that $\gamma(s, t) : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ satisfies the metric axioms: positivity axiom, symmetry axiom, and triangle inequality axiom. Moreover, $\gamma(\cdot, \cdot)$ represents the cost of transporting a point mass located at $s$ following the distribution $\lambda_1$ to $t$ following the distribution $\lambda_2$. As a result, the integral $\int_{\mathcal{I} \times \mathcal{I}} \gamma(s, t)^p d\lambda(s, t)$ represents the total cost of transporting points from $\lambda_1$ to $\lambda_2$ given that the joint distribution of $\lambda_1$ and $\lambda_2$ is $\tilde{\lambda}$. $D_p(\lambda_1, \lambda_2)$ is thus the minimum cost among all joint distributions of $(\lambda_1, \lambda_2)$. The vector space $\mathcal{W}_p(\mathcal{I})$ equipped with the metric $D_p(\cdot, \cdot)$ forms the $p$-Wasserstein space (denoted as $(\mathcal{W}_p(\mathcal{I}), D_p(\cdot, \cdot))$). Since the function $\gamma(s, t)$ in Definition 1 satisfies the metric axioms, the distance measures $D_p(\cdot, \cdot)$ also satisfies the metric axioms. Consequently, the $p$-Wasserstein space is indeed a metric space. In the sequel, we assume that $p = 2$ and $\gamma(s, t) = |s - t|$. One of the significant advantages of using the Wasserstein metric is its ability to preserve the structural properties of the distributions being averaged. As in Figure 2, the red line represents the average of all ten normal distributions computed using the Wasserstein metric, and it retains the shape of normal distributions. 
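For one-dimensional distributions and the cost $\gamma(s, t) = |s - t|$, the metric in Definition 1 has a closed form in terms of quantile functions, $D_p(\lambda_1, \lambda_2)^p = \int_0^1 |\lambda_1^{-1}(u) - \lambda_2^{-1}(u)|^p du$, and the 2-Wasserstein barycenter is the distribution whose quantile function is the average of the individual quantile functions. The following minimal NumPy sketch illustrates both facts; the grid size and the ten Gaussian examples are illustrative choices, not the paper's setup.

```python
import numpy as np

def quantile_fn(samples, u_grid):
    """Empirical quantile function evaluated at quantile levels u_grid in (0, 1)."""
    return np.quantile(samples, u_grid)

def wasserstein2(samples_a, samples_b, n_grid=512):
    """2-Wasserstein distance between two 1-D samples via their quantile functions."""
    u = (np.arange(n_grid) + 0.5) / n_grid                  # quantile levels
    qa, qb = quantile_fn(samples_a, u), quantile_fn(samples_b, u)
    return np.sqrt(np.mean((qa - qb) ** 2))                 # approximates the integral over u

def wasserstein_barycenter(list_of_samples, n_grid=512):
    """2-Wasserstein barycenter of 1-D distributions: average of their quantile functions."""
    u = (np.arange(n_grid) + 0.5) / n_grid
    q_mat = np.stack([quantile_fn(s, u) for s in list_of_samples])
    return u, q_mat.mean(axis=0)                            # quantile function of the barycenter

# Illustration in the spirit of Figure 2: ten Gaussians with different means and spreads.
rng = np.random.default_rng(0)
dists = [rng.normal(mu, sd, size=2000)
         for mu, sd in zip(np.linspace(-3, 3, 10), np.linspace(0.5, 1.5, 10))]
u, bary_q = wasserstein_barycenter(dists)                   # still bell-shaped, unlike pointwise CDF averaging
print(wasserstein2(dists[0], dists[-1]))
```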
### 4 Causal Quantities #### 4.1 Causal Map and Causal Effect Map Similar to the ATE, the target quantity in our paper is called the causal effect map, which provides a comprehensive understanding of the treatment-response relationships. **Definition 2** The causal effect map $\triangle_{a\bar{a}}$ between treatments $a$ and $\bar{a}$ is defined as $$\triangle_{a\bar{a}} := \triangle_a - \triangle_{\bar{a}} := \mu_a^{-1} - \mu_{\bar{a}}^{-1}, \quad (2)$$ where $\mu_a := \arg\min_{v \in \mathcal{W}_2(\mathcal{I})} \mathbb{E}[D_2^2(Y(a), v)]$. We also term $\triangle_a$ the causal map of treatment $a$. Here, the realization of $Y(a)$ is a distribution, so $D_2^2(Y(a), v)$ is a random variable and the expectation $\mathbb{E}[D_2^2(Y(a), v)]$ is taken over the randomness of $Y(a)$; it represents the expected squared Wasserstein distance from $Y(a)$ to the candidate $v \in \mathcal{W}_2(\mathcal{I})$. As a result, the expected squared Wasserstein distance centered at the minimizer $\mu_a$ is the smallest, and $\mu_a$ is commonly referred to as the Wasserstein barycenter of $Y(a)$. Notably, $\mu_a$ is a CDF, and thus $\mu_a^{-1}$ is its inverse, which is also known as the quantile function. #### 4.2 Properties of Causal Map/Causal Effect Map From the previous section, we have shown that $\triangle_a = \mu_a^{-1}$ where $\mu_a := \arg\min_{v \in \mathcal{W}_2(\mathcal{I})} \mathbb{E}[D_2^2(Y(a), v)]$. The calculation of $\triangle_a(\cdot)$ requires solving an optimization problem in the Wasserstein space. This optimization step can be computationally demanding, particularly when dealing with high-dimensional data or large sample sizes. To enhance efficiency, we simplify the calculation of $\triangle_a(\cdot)$ and eliminate the optimization step. We conclude this point in Proposition 1. **Proposition 1** Given that Assumptions 1-4 hold, we have $\triangle_a = \mathbb{E}[Y(a)^{-1}]$. We defer the proof to Appendix C. $\mathbb{E}[Y^{-1}(a)]$ represents the expectation of the inverse CDF (quantile function) when all units in the population receive treatment $a$. #### 4.3 Estimators In practice, we often encounter situations where not all individuals receive treatment \(a\), and in some cases, no individuals receive treatment \(a\), especially when \(A\) is a continuous variable. To address this challenge and facilitate practical estimation from observed datasets, we further explore three alternative estimators of \(\mathbb{E}[Y^{-1}(a)]\), namely the Distributional Direct Regression (Dist-DR) estimator, the Distributional Inverse Propensity Weighting (Dist-IPW) estimator, and the Distributional Doubly Machine Learning (Dist-DML) estimator. **Dist-DR estimator** can be obtained simply using Causal Assumptions (2)-(3). Indeed, we have \[ \Delta_a = \mathbb{E}[Y(a)^{-1}] = \mathbb{E}[\mathbb{E}[Y(a)^{-1}|X]] = \mathbb{E}[\mathbb{E}[Y(a)^{-1}|A = a, X]] = \mathbb{E}[\mathbb{E}[Y^{-1}|A = a, X]]. \tag{3} \] Here, the second equality is the law of iterated expectations, the third equality follows from Causal Assumption (3) (Ignorability), and the last equality follows from Causal Assumption (2) (Consistency). Let us define \(m_a(X) = \mathbb{E}[Y^{-1}|A = a, X]\), which is a regression function that can be estimated using any functional regression method, e.g., Chen et al. (2016). As such, we obtain the Dist-DR estimator \(\Delta_{a;DR}\) using sample averaging such that \[ \Delta_{a;DR} = \frac{1}{N} \sum_{s=1}^{N} m_a(X_s). \tag{4} \] However, the Dist-DR estimator neglects the potential influence of the covariates \(X\) on the treatment variable \(A\) and is not suitable for causal analysis unless the observed dataset comes from a randomized design. Thus, we consider expressing \(\mathbb{E}[Y(a)^{-1}]\) in other forms.
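Before moving to the weighted alternatives, the Dist-DR estimator of Eqn. 4 can be sketched directly: fit a function-on-scalar regression of the discretized quantile function on $(A, X)$, evaluate it at $A = a$ for every unit, and average. In the sketch below, `fit_quantile_regression` is a hypothetical stand-in for any functional regression routine (e.g., in the spirit of Chen et al., 2016).

```python
import numpy as np

def dist_dr_estimator(A, X, Yinv, a, fit_quantile_regression):
    """
    Dist-DR estimate of the causal map at treatment level a (Eqn. 4).

    A    : (N,)   observed treatments
    X    : (N, m) covariates
    Yinv : (N, G) observed quantile functions discretized on G quantile levels
    fit_quantile_regression(A, X, Yinv) -> callable m(a_vec, X) -> (n, G)
        hypothetical function-on-scalar regression of Y^{-1} on (A, X)
    """
    m_hat = fit_quantile_regression(A, X, Yinv)      # estimate m_a(X) = E[Y^{-1} | A=a, X]
    preds = m_hat(np.full(len(X), a), X)             # evaluate at A = a for every unit
    return preds.mean(axis=0)                        # sample average over units (Eqn. 4)
```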
**Dist-IPW estimator** uses the Horvitz–Thompson Theorem (Horvitz & Thompson, 1952; Overton & Stehman, 1995), and we can show that **Proposition 2** Given that Assumptions [1]-[4] hold, we have \[ \Delta_a = \mathbb{E}\left[\frac{\delta(A-a)}{p(a|X)} Y^{-1}\right]. \tag{5} \] Here, \(\delta(\cdot)\) is known as the Delta Dirac function. In Eqn. 5, the term \(\frac{\delta(A-a)}{p(a|X)}\) serves as the weight to construct a pseudo-population, where groups with a smaller portion in the dataset receive larger weights, while groups with a larger portion receive smaller weights. These weights are usually constructed using the (generalized) propensity scores, which capture the likelihood of receiving treatment based on covariates. We defer the proof in Appendix D. Unlike the Dist-DR estimator, we cannot directly construct estimators according to equation 5 using sample averaging due to the presence of the Delta Dirac function \(\delta(A-a)\). To overcome this, we usually replace the Delta Dirac function with some Kernel Approximations. **Definition 3 (Kernel function)** 1. Given that \(K(\cdot): \mathbb{R} \rightarrow \mathbb{R}\) is a symmetric function (i.e., \(K(v) = K(-v) \forall v \in \mathbb{R}\)). We say that \(K(\cdot)\) is a kernel function if it satisfies \(\int_{\mathbb{R}} K(v) dv = 1\). 2. A kernel function \(K(\cdot)\) is said to have order \(\nu\) (\(\nu\) is an even number) if \(\int_{\mathbb{R}} v^j K(v) dv = 0 \forall 1 \leq j \leq \nu - 1\) and \(\int_{\mathbb{R}} v^\nu K(v) dv < \infty\). In this paper, we concentrate on the second-order kernel function and present some commonly used second-order kernels in Appendix D. For any arbitrary kernel function \(K(x)\), we can define the scaled kernel with bandwidth \(h\). It is denoted as \(K_h(x)\) such that \(K_h(x) := \frac{1}{h} K\left(\frac{x}{h}\right)\). Due to the fact that \(\lim_{h \to 0} K_h(x) = \delta(x)\), we can replace \(\delta(A-a)\) in equation 5 with \(K_h(A-a)\), and we can then construct the Dist-IPW estimator \(\Delta_{a;IPW}\) using sample averaging such that \[ \Delta_{a;IPW} = \frac{1}{N} \sum_{s=1}^{N} \frac{K_h(A_s-a)}{p(a|X_s)} Y_s^{-1}. \tag{6} \] The Dist-DR estimator uses the nuisance parameter \( \mathbb{E}[Y^{-1}|A = a, X] \) only, while the Dist-IPW estimator uses the nuisance parameter \( p(a|X) \) only. Naturally, we can derive an estimator that requires both the nuisance parameters \( \mathbb{E}[Y^{-1}|A = a, X] \) and \( p(a|X) \). **Dist-DML estimator** is indeed developed from the Doubly Machine Learning Theorem as depicted in Chernozhukov et al. (2018). The theorem provides a powerful framework that combines the benefits of both the Dist-DR estimator and the Dist-IPW estimator. To start with, we show that \( \Delta_a \) can be expressed in terms of \( \mathbb{E}[Y^{-1}|A = a, X] \) and \( p(a|X) \) in Proposition 3. **Proposition 3** Denote \( m_a(X) = \mathbb{E}[Y^{-1}|A = a, X] \). Suppose that Assumptions 1–4 hold, we have \[ \Delta_a = \mathbb{E}\left[ m_a(X) + \frac{\delta_a(A)}{p(a|X)} [Y^{-1} - m_a(X)] \right]. \] (7) We defer the proof in Appendix E. Moreover, as the Dist-DR and Dist-IPW estimators, we can also estimate the Dist-DML estimator \( \hat{\Delta}_{a;DML} \) using sample averaging such that \[ \hat{\Delta}_{a;DML} = \frac{1}{N} \sum_{s=1}^{N} \left[ m_a(X_s) + \frac{K_h(A_s - a)}{p(a|X_s)} (Y_s^{-1} - m_a(X_s)) \right]. 
\] (8) The Dist-DML estimator possesses a valuable property known as **double robustness**: equation (7) still holds even if either \( p(a|X) \) or \( m_a(X) \) (but not both) is misspecified. We prove this property in Appendix F. Further, the required estimation accuracy for \( m_a(\cdot) \) and \( p(a|X) \) can be relaxed if the Dist-DML estimator is used in lieu of the Dist-DR estimator or the Dist-IPW estimator (see Theorem 2 in Appendix H). ### 4.4 ALGORITHM In the previous section, we have derived the estimators \( \hat{\Delta}_{a;DR}, \hat{\Delta}_{a;IPW}, \) and \( \hat{\Delta}_{a;DML} \). To compute these estimators from an observed dataset, we employ the cross-fitting technique, which can help mitigate the risk of over-fitting (Chernozhukov et al., 2018). In particular, we partition the \( N \) samples into \( K \) disjoint groups, where the \( k \)-th group is denoted as \( D_k \) and contains \( N_k \) samples, for all \( k \in \{1, \ldots, K\} \). Let \( D_{-k} = \bigcup_{r=1, r \neq k}^{K} D_r \), and we use \( D_{-k} \) to learn the estimated nuisance parameters \( \hat{m}_a^k(X) \) and \( \hat{p}_k(a|X) \) of \( m_a(\cdot) \) and \( p(a|\cdot) \). We also denote the empirical estimate of \( Y \) by \( \hat{Y} \). Subsequently, we utilize \( D_k \) to compute \[ \hat{\Delta}_{a;DR}^k = \frac{1}{N_k} \sum_{s \in D_k} \hat{m}_a^k(X_s), \] (9) \[ \hat{\Delta}_{a;IPW}^k = \frac{1}{N_k} \sum_{s \in D_k} \frac{K_h(A_s - a)}{\hat{p}_k(a|X_s)} \hat{Y}_s^{-1}, \] (10) \[ \hat{\Delta}_{a;DML}^k = \frac{1}{N_k} \sum_{s \in D_k} \left[ \hat{m}_a^k(X_s) + \frac{K_h(A_s - a)}{\hat{p}_k(a|X_s)} (\hat{Y}_s^{-1} - \hat{m}_a^k(X_s)) \right]. \] (11) Consequently, we can obtain the cross-fitted estimators \( \hat{\Delta}_{a;w} \) such that \[ \hat{\Delta}_{a;w} = \sum_{k=1}^{K} \frac{N_k}{N} \hat{\Delta}_{a;w}^k, \] (12) where \( w \in \{\text{Dist-DR, Dist-IPW, Dist-DML}\} \). We finally present an algorithm that summarizes the procedure for computing the cross-fitted estimators \( \hat{\Delta}_{a;w} \) in Appendix G. ### 5 THEORY In this section, we aim to study the asymptotic properties of the proposed estimator \( \hat{\Delta}_{a;w} \) for any \( w \in \{\text{Dist-DR, Dist-IPW, Dist-DML}\} \). Let \( X \) be a random variable with distribution \( F_X(x) \). Generally, we consider three types of \( L^2 \) spaces containing different forms of functions: i) \( f : \mathcal{X} \to \mathbb{R} \); ii) \( g, \tilde{g} : [0, 1] \to \mathbb{R} \); and iii) \( \Gamma : \mathcal{X} \times [0, 1] \to \mathbb{R} \). For the second type of \( L^2 \) space, we can define an inner product \( \langle \cdot, \cdot \rangle \) such that \( \langle g, \tilde{g} \rangle = \int_{[0,1]} g(t)\tilde{g}(t)dt \), where \( \int_{[0,1]} |g(t)|^2 dt, \int_{[0,1]} |\tilde{g}(t)|^2 dt < \infty \). For each \( L^2 \) space, we have the corresponding norm: i) \( \|f(X)\|_2^2 = \int_{\mathcal{X}} |f(x)|^2 dF_X(x) = \mathbb{E}[|f(X)|^2] \); ii) \( \|g\|^2 = \int_{[0,1]} g(t)^2 dt \); and iii) \( \|\Gamma\|^2 = \int_{\mathcal{X}} \|\Gamma(x, \cdot)\|^2 dF_X(x) \). We also let \( P_N \) be the empirical average operator such that \( P_N O = \frac{1}{N} \sum_{s=1}^N O_s \). We use \( \hat{m}_a^k(\cdot) \) and \( \hat{\tilde{m}}_a^k(\cdot) \) to denote the estimates of \( m_a(\cdot) \) based on the set \( D_{-k} \) using the outcomes \( Y \) and \( \hat{Y} \), respectively.
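Before stating the convergence assumptions needed for the theory, the cross-fitted estimators of Eqns. 9-12 can be summarized in a short sketch. Here `fit_m` and `fit_p` are hypothetical stand-ins for the user's functional regression and conditional-density estimators, and a Gaussian second-order kernel replaces the Dirac delta as in Eqn. 6.

```python
import numpy as np

def gaussian_kernel(v):
    """Second-order kernel: symmetric and integrates to one."""
    return np.exp(-0.5 * v ** 2) / np.sqrt(2.0 * np.pi)

def cross_fitted_estimators(A, X, Yinv, a, h, fit_m, fit_p, K=2, seed=0):
    """
    Cross-fitted Dist-DR / Dist-IPW / Dist-DML estimates of the causal map at level a.

    A    : (N,)   treatments;  X : (N, m) covariates
    Yinv : (N, G) quantile functions discretized on G levels
    fit_m(A, X, Yinv) -> callable m(a_vec, X) -> (n, G)   (functional regression)
    fit_p(A, X)       -> callable p(a, X)     -> (n,)     (generalized propensity density)
    """
    N, G = Yinv.shape
    folds = np.random.default_rng(seed).permutation(N) % K          # K disjoint groups
    out = {"Dist-DR": np.zeros(G), "Dist-IPW": np.zeros(G), "Dist-DML": np.zeros(G)}
    for k in range(K):
        te, tr = folds == k, folds != k
        m_k = fit_m(A[tr], X[tr], Yinv[tr])                          # nuisances learned on D_{-k}
        p_k = fit_p(A[tr], X[tr])
        m_a = m_k(np.full(te.sum(), a), X[te])                       # \hat m_a^k(X_s), shape (N_k, G)
        w = gaussian_kernel((A[te] - a) / h) / h / p_k(a, X[te])     # K_h(A_s - a) / \hat p_k(a|X_s)
        frac = te.sum() / N                                          # N_k / N  (Eqn. 12)
        out["Dist-DR"]  += frac * m_a.mean(axis=0)                                    # Eqn. 9
        out["Dist-IPW"] += frac * (w[:, None] * Yinv[te]).mean(axis=0)                # Eqn. 10
        out["Dist-DML"] += frac * (m_a + w[:, None] * (Yinv[te] - m_a)).mean(axis=0)  # Eqn. 11
    return out
```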
In addition, let \( \rho_m^4 = \sup_{a \in \mathcal{A},\, 1 \leq k \leq K} \| \hat{m}_a^k - m_a \|_4^4 = \sup_{a \in \mathcal{A},\, 1 \leq k \leq K} \int_{\mathcal{X}} \| \hat{m}_a^k(x) - m_a(x) \|^4 dF_X(x) \) and define \( \rho_p^4 = \sup_{a \in \mathcal{A}} \mathbb{E}[\hat{p}_k(a|X) - p(a|X)]^4 \). Finally, we present the convergence assumptions that are required in studying the asymptotic properties of the proposed estimators. **Convergence Assumption 1** The estimates \( \hat{Y}_1, \cdots, \hat{Y}_N \) are independent of each other. Further, there are two sequences of constants \( \alpha_N = o(N^{-\frac{1}{2}}) \) and \( \nu_N = o(N^{-\frac{1}{2}}) \) (note that \( o(N^{-\frac{1}{2}}) \) implies \( o(1) \) automatically) such that \[ \sup_{1 \leq s \leq N} \sup_{v \in \mathcal{W}_2(\mathcal{I})} \mathbb{E}[D_2^2(\hat{Y}_s, Y_s)|Y_s = v] = O(\alpha_N^2) \quad \text{and} \quad \sup_{1 \leq s \leq N} \sup_{v \in \mathcal{W}_2(\mathcal{I})} \mathbb{V}[D_2^2(\hat{Y}_s, Y_s)|Y_s = v] = O(\nu_N^2). \] **Convergence Assumption 2** \( \forall a \in \mathcal{A} \) and \( \forall 1 \leq k \leq K \), we have 1. \( \sup_{x \in \mathcal{X}} \| \hat{m}_a^k(x) - m_a(x) \| = o_P(1) \) and \( \sup_{x \in \mathcal{X}} \| \hat{p}_k(a|x) - p(a|x) \| = o_P(1) \). 2. \( \| \hat{m}_a^k - \hat{\tilde{m}}_a^k \| = O_P(N^{-1} + \alpha_N^2 + \nu_N^2) \). 3. There exist constants \( c_1 \) and \( c_2 \) such that \( 0 < c_1 \leq \frac{N_k}{N} \leq c_2 < 1 \) for all \( N \). In Theorem 1, we only present the asymptotic properties of \( \hat{\Delta}_{a;DML} \). For other cases, we defer the asymptotic studies to Appendix H. **Theorem 1** Suppose that \( p(a|x) \in C^3 \) on \( \mathcal{A} \) such that the derivatives (including the 0-th order derivative) are bounded uniformly in the sample space for any \( x \). Further, we assume that \( \mathbb{E}[Y^{-1}|A = a, X] \in C^3 \) on \([0,1] \times \mathcal{A}\) and \( \mathbb{E}[\|Y^{-1}\|\,|A = a, X] \in C^3 \) on \( \mathcal{A} \), which are bounded uniformly in the sample space. If \( h \to 0, Nh \to \infty, \) and \( Nh^5 \to C \in [0, \infty) \), then, under the convergence assumptions, we have \[ \sqrt{Nh}(\hat{\Delta}_{a;w} - \Delta_a) = \sqrt{Nh} \left[ P_N [\varphi(A,X,Y)] - \Delta_a \right] + o_P(1), \] where \( \varphi(A,X,Y) = \frac{K_h(A-a)(Y^{-1}-m_a(X))}{p(a|X)} + m_a(X) \) if \( w = \text{Dist-DML} \) and \( \rho_m \rho_p = o(N^{-\frac{1}{2}}), \rho_m = o(1), \rho_p = o(1) \). Additionally, \( \sqrt{Nh} \{\hat{\Delta}_{a;w} - \Delta_a - h^2 B_a \} \) converges weakly to a centred Gaussian process in \( L^2([0,1]) \), where \( B_a = (\int u^2 K(u)du) \times \left( \mathbb{E}\left[ \frac{\partial_a m_a(X)\, \partial_a p(a|X)}{p(a|X)} \right] + \frac{1}{2} \mathbb{E}[\partial_{aa} m_a(X)] \right) \). We also defer the proofs of Theorem 1 to Appendix H. Note that if estimators are constructed from the Dist-DML form, the accuracy required in estimating the nuisance parameters can be relaxed: we only require that \( \rho_m \rho_p \) equals \( o(N^{-\frac{1}{2}}) \). For example, we can have both \( \rho_m \) and \( \rho_p \) equal \( o(N^{-\frac{1}{4}}) \) if the Dist-DML estimator is used, but we must have \( \rho_m \) and \( \rho_p \) equal \( o(N^{-\frac{1}{2}}) \) if either the Dist-DR estimator or the Dist-IPW estimator is used (see Appendix H). ### 6 Simulation Experiment To validate our theoretical results, we conduct a simulated experiment where the treatment variable \( A \) takes continuous values.
The outcome \( Y_s^{-1} \) for each unit is simulated as \[ Y_s^{-1}(A_s) = c + (1-c)(\mathbb{E}[A_s] + \exp(A_s)) \times \sum_{j=1}^{m} \frac{\exp(X_s^{2j-1}X_s^{2j})}{\sum_{k=1}^{m} \exp(X_s^{2k-1}X_s^{2k})} B^{-1}(\alpha_j, \beta_j) + \epsilon_s. \] Table 1: The experiment results for three estimators on treatment $A = 0.00$. The reported values are averages across 100 experiments, with Std. in parentheses. The best results are highlighted in bold. | | Q=0.1 | Q=0.2 | Q=0.3 | Q=0.4 | Q=0.5 | Q=0.6 | Q=0.7 | Q=0.8 | Q=0.9 | Error | |----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Ground | 0.0112| 0.0462| 0.1083| 0.2271| 0.5026| 0.7782| 0.8970| 0.9591| 0.9941| | | Dist-DR | 0.0101| 0.0364| 0.1412| 0.3009| 0.4917| 0.6879| 0.8561| 0.9609| 0.9670| (0.0050)| | Dist-DR-MAE | 0.0011| 0.0099| 0.0329| 0.0738| 0.0109| 0.0903| 0.0409| 0.0019| 0.0271| 0.0321| | Dist-IPW | 0.0714| 0.0557| 0.1240| 0.2424| 0.4817| 0.7064| 0.8190| 0.8809| 0.9293| (0.0004)| | Dist-IPW-MAE | 0.0041| 0.0095| 0.0158| 0.0153| 0.0210| 0.0718| 0.0780| 0.0781| 0.0648| 0.0398| | Dist-DML | 0.0080| 0.0589| 0.1353| 0.2658| 0.5195| 0.7591| 0.8846| 0.9547| 1.0039| (0.0006)| | Dist-DML-MAE | 0.0032| 0.0127| 0.0270| 0.0387| 0.0169| 0.0190| 0.0124| 0.0044| 0.0098| 0.0160| Figure 3: The inverse CDF of 5 simulated units. Figure 4: The estimated quantile function when $A = 0.00$ from Dist-DR (left), Dist-IPW (middle), and Dist-DML (right) methods. Here, $m$ is an even number that indicates the number of covariates. $B^{-1}(\alpha, \beta)$ is the inverse CDF of Beta distribution with the shapes’ parameters $\alpha$ and $\beta$. We choose Beta distributions since they vary widely given different parameters. The constant $c$ controls the strength of the causal relationship between $A_s$ and $Y_s^{-1}$. $\epsilon_s$ is the noise that follows $\mathcal{N}(0, 0.05)$. Then, the treatment $A_s$ for each unit is generated by $$A_s \sim \mathcal{N}(\gamma^\top X_s, \log(1 + \exp(\delta^\top X_s))).$$ Since the ground truth outcome and the predicted outcome are functions, we thus discretize them and compare the mean absolute error (MAE) between ground truth outcome $\Delta_a$ and estimated causal effect map $\hat{\Delta}_a$ on 9 quantiles with levels ranging from 0.1 to 0.9. We repeat the experiment 100 times to report the mean and standard deviation of MAE. **Experiment Settings** We choose $m = 10$ such that $X^1, X^2 \sim \mathcal{N}(-2, 1), X^3, X^4 \sim \mathcal{N}(-1, 1), X^5, X^6 \sim \mathcal{N}(0, 0.1), X^7, X^8 \sim \mathcal{N}(1, 1),$ and $X^9, X^{10} \sim \mathcal{N}(2, 1)$. Within each unit, 100 observations are generated in accordance with equation 14a using the inverse transform sampling technique. In total, 5,000 units are generated. Figure 3 offers a visual representation of 5 simulated instances, showcasing the variability in outcome functions across different units. We first estimate the functional regression $\hat{m}_a(X_s)$ by regressing $\tilde{Y}^{-1}$ w.r.t. $(A, X)_s$. Then, conventional methods might assume a specific form for $p(a|X_s)$, such as a linear form (Su et al., 2019), or employ kernel-based techniques (Colangelo & Lee, 2019). We adopt a generative approach to estimate the density function, drawing inspiration from Grathwohl et al. (2019). **Experiment Results** We conduct the experiment across three distinct treatment levels: $A = -0.05$, $A = 0.00$, and $A = 0.05$. 
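For concreteness, the data-generating process of Eqns. 14a-14b can be sketched as below. This is only a rough reading of the DGP: the constants ($c$, the Beta shapes $\alpha_j, \beta_j$, the coefficients $\gamma, \delta$) are placeholders rather than the paper's values, the mixture is taken over the $m/2$ covariate pairs, and $\mathbb{E}[A_s]$ in Eqn. 14a is replaced by the sample mean of the treatments.

```python
import numpy as np
from scipy.stats import beta

def simulate_dataset(N=5000, m=10, c=0.5, n_levels=99, seed=0):
    """Sketch of the DGP in Eqns. 14a-14b; constants are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    means = np.repeat(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), 2)          # covariate means as in the settings
    sds = np.repeat(np.array([1.0, 1.0, 0.1, 1.0, 1.0]), 2)              # 0.1 treated as the scale parameter
    X = rng.normal(means, sds, size=(N, m))
    gamma, delta = rng.normal(size=m) * 0.1, rng.normal(size=m) * 0.1    # placeholder coefficient vectors
    A = rng.normal(X @ gamma, np.log1p(np.exp(X @ delta)))               # Eqn. 14b
    u = np.linspace(0.01, 0.99, n_levels)                                # quantile levels of Y^{-1}
    alphas, betas = rng.uniform(1, 5, m // 2), rng.uniform(1, 5, m // 2) # placeholder Beta shapes
    logits = np.stack([X[:, 2 * j] * X[:, 2 * j + 1] for j in range(m // 2)], axis=1)
    logits -= logits.max(axis=1, keepdims=True)                          # numerically stable softmax
    w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)       # mixture weights over covariate pairs
    Bq = np.stack([beta.ppf(u, alphas[j], betas[j]) for j in range(m // 2)])   # (m/2, G) Beta quantile functions
    Yinv = c + (1 - c) * (A.mean() + np.exp(A))[:, None] * (w @ Bq)      # Eqn. 14a (see the caveats above)
    Yinv += rng.normal(0, 0.05, size=(N, 1))                             # noise epsilon_s
    return X, A, Yinv
```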
The true outcome distribution is computed using the DGP in equations 14a and 14b, with the corresponding results displayed in the first row of Table 1 for $A = 0.00$. Subsequently, we list the estimation results (mean, std., and MAE) produced by the Dist-DR, Dist-IPW, and Dist-DML estimators. We list the results for $A = -0.05$ and $A = 0.05$ in Appendix I. We also plot the ground truth and recovered quantile functions in Figure 4. Overall, all estimators are effective in recovering the true outcome distribution. Nonetheless, the Dist-DR estimator yields the largest MAE, and the Dist-IPW estimator offers improved estimations but demonstrates the highest variance. These results are in line with our theoretical analysis. In contrast, the Dist-DML estimator can correct most of the bias in the Dist-DR estimator and the variance in the Dist-IPW estimator, resulting in more accurate and robust estimates. 7 Empirical Application We employ our approach to investigate the causal impact of working hours on physical activity intensity based on a public dataset named the National Health and Nutrition Examination Survey (NHANES), which aims to evaluate the health of people in the United States. The dataset includes demographics, diet, socioeconomic status, medical and physiological assessments, and laboratory tests of participants. The physical activity intensity is recorded over successive 1-minute intervals; these records form a distribution for each person, which we represent by its empirical CDF. After data preprocessing, we obtain 2,762 participants. We use the Dist-DML estimator, which performs best in the simulation experiment, to estimate the causal map. We run the experiment 50 times; in each run, the estimator is computed with 2-fold cross-fitting. Detailed data and statistical descriptions, data preprocessing, and the training details are given in Appendix I. Figure 5 presents the empirical findings. The lines correspond to the causal map illustrating the distribution of activity intensity at quantiles 0.1, 0.3, 0.5, 0.7, and 0.9 across a range of working hours spanning from 0 to 80 hours per week. The shaded bands represent the 50% and 95% confidence intervals for our estimations. In general, for regular-level activity intensity (e.g., quantiles lower than 0.7), such as walking and jogging, our analysis reveals a consistent pattern: an increase in working hours is associated with a decrease in activity intensity. This phenomenon can be attributed to the fact that longer working hours tend to displace available time for physical exercise. Conversely, when we focus on high-intensity activities (i.e., activity intensity beyond the 0.9 quantile), our observations suggest the opposite relationship: an increase in working hours results in heightened activity intensity. This can be attributed to the observation that individuals exhibiting higher levels of activity intensity typically engage in manual labor occupations, so an expansion of working hours among such individuals results in an elevation of their activity intensity levels. 8 Conclusion In this paper, we present a novel approach to conducting causal inference in the Wasserstein space, departing from the conventional practice in the Euclidean space. By leveraging Rubin's causal framework, we introduce three estimators, the Dist-DR, Dist-IPW, and Dist-DML estimators, enabling the investigation of the causal impact of continuous treatments on distributional outcomes.
Furthermore, we have conducted a comprehensive study of the statistical properties of these estimators, providing valuable theoretical insights. To validate our theoretical findings, we conduct two experiments, one simulation experiment and one empirical application. The results of our study demonstrate the enhanced performance of the Dist-DML estimator. Future research includes i) extending the investigation to other causal estimators, such as ATTE and CATE; ii) exploring the application of this methodology in various domains, including but not limited to healthcare, business, and social sciences. REFERENCES Jason Abrevaya, Yu-Chin Hsu, and Robert P Lieli. Estimating conditional average treatment effects. *Journal of Business & Economic Statistics*, 33(4):485–505, 2015. Xiong Cai, Liugen Xue, Jiguo Cao, and Alzheimer’s Disease Neuroimaging Initiative. Robust estimation and variable selection for function-on-scalar regression. *Canadian Journal of Statistics*, 50(1):162–179, 2022. Yakuan Chen, Jeff Goldsmith, and R Todd Ogden. Variable selection in function-on-scalar regression. *Stat*, 5(1):88–101, 2016. Victor Chernozhukov and Christian Hansen. An iv model of quantile treatment effects. *Econometrica*, 73(1):245–261, 2005. Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. *Econometrics Journal*, 21(1), 2018. Kyle Colangelo and Ying-Ying Lee. Double debiased machine learning nonparametric inference with continuous treatments. Technical report, Centre for Microdata Methods and Practice, Institute for Fiscal Studies, 2019. Kreske Ecker, Xavier de Luna, and Lina Schelin. Causal inference with a functional outcome. *arXiv preprint arXiv:2304.07113*, 2023. Qingliang Fan, Yu-Chin Hsu, Robert P Lieli, and Yichong Zhang. Estimation of conditional average treatment effects with high-dimensional data. *Journal of Business & Economic Statistics*, 40(1):313–327, 2022. Nelson Feyeux, Arthur Vidard, and Maëlle Nodet. Optimal transport for variational data assimilation. *Nonlinear Processes in Geophysics*, 25(1):55–66, 2018. Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: free-form continuous dynamics for scalable reversible generative models. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJxqknCcK7. Erin Hartman, Richard Grieve, Roland Ramsahai, and Jasjeet S Sekhon. From sample average treatment effect to population average treatment effect on the treated: combining experimental with observational studies to estimate population treatment effects. *Journal of the Royal Statistical Society Series A: Statistics in Society*, 178(3):757–778, 2015. Keisuke Hirano, Guido W Imbens, and Geert Ridder. Efficient estimation of average treatment effects using the estimated propensity score. *Econometrica*, 71(4):1161–1189, 2003. Daniel G Horvitz and Donovan J Thompson. A generalization of sampling without replacement from a finite universe. *Journal of the American statistical Association*, 47(260):663–685, 1952. Yiyan Huang, Cheuk Hang Leung, Xing Yan, Qi Wu, Nanbo Peng, Dongdong Wang, and Zhixiang Huang. The causal learning of retail delinquency. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 204–212, 2021. Zhenhua Lin, Dehan Kong, and Linbo Wang. 
Causal inference on distribution functions. *arXiv preprint arXiv:2101.01599*, 2021. W Scott Overton and Stephen V Stehman. The horvitz-thompson theorem as a unifying perspective for probability sampling: with examples from natural resource sampling. *The American Statistician*, 49(3):261–268, 1995. Victor M Panaretos and Yoav Zemel. Statistical aspects of wasserstein distances. *Annual review of statistics and its application*, 6:405–431, 2019.
tFYcEUlUTt
From the paper and figures I assume that the predictor outputs a series of temporally coarse frames, which are then predominantly used to refine the first prediction. The procedure is then repeated with this predicted and refined first prediction. To me, this means that $L>1$ predictions are made to advance the simulation by one frame. Is this the case? This should also be clarified in the paper.
LEARNING FROM THE FUTURE: IMPROVE LONG-TERM MESH-BASED SIMULATION WITH FORESIGHT Anonymous authors Paper under double-blind review ABSTRACT This paper studies the problem of learning mesh-based physical simulations, a crucial task with applications in fluid mechanics and aerodynamics. Recent works typically utilize graph neural networks to produce next-time states on irregular meshes by modeling interacting dynamics, and then adopt iterative rollouts for the whole trajectories. However, these methods cannot achieve satisfactory performance in long-term predictions due to the failure of capturing long-term dependency and potential error accumulation. To tackle this, we introduce a new future-to-present learning perspective, and further develop a simple yet effective approach named Foresight And InteRpolation (FAIR) for long-term mesh-based simulations. The main idea of FAIR is to first learn a graph ODE model for coarse long-term predictions and then refine short-term predictions via interpolation. Specifically, FAIR employs a continuous graph ODE model that incorporates past states into the evolution of interacting node representations, which is capable of learning coarse long-term trajectories under a multi-task learning framework. Then, we leverage a channel aggregation strategy to summarize the trajectories for refined short-term predictions, which can be illustrated using an interpolation process. Through pyramid-like alternative propagation between the foresight step and refinement step, FAIR can generate accurate long-term trajectories, achieving an error reduction of up to 25.4% on benchmark datasets. Extensive ablation studies and visualization further validate the superiority of the proposed FAIR. 1 INTRODUCTION Physics simulations are of paramount importance for understanding fundamental principles in various domains, including mechanics [Ren & Tang, 2010; Wang et al., 2022], electromagnetics [Pardo et al., 2007], biology [Wang & Wu, 2021] and acoustics [Marburg & Nolte, 2008]. The majority of studies utilize mesh-based finite element systems to describe complicated physics by simulating the interactions of mesh points. To achieve the optimal use of resource budgets for unstructured surfaces, they usually allocate greater resolution to regions of interest where more accurate analysis is expected, resulting in complicated irregular mesh structures [Guskov et al., 2002; Liu et al., 2022a; Dong et al., 2023; Liang et al., 2022]. Traditional numerical solvers usually require a heavy computational burden, and thus efficient data-driven simulators have drawn ever-lasting interest recently. With the rapid development of deep learning techniques, several data-driven simulators have been recently proposed to learn numerical simulations on structured grids [Fotiadis et al., 2020; Kim et al., 2019; Tompson et al., 2017; Rao et al., 2023; Cao et al., 2023]. To adapt to irregular meshes, graph machine learning-based approaches have received more attention gradually [Pfaff et al., 2021; Xu et al., 2021; Shao et al., 2022; Sanchez-Gonzalez et al., 2020]. The majority of them first construct a geometric graph where mesh points are considered as nodes and utilize graph neural networks (GNNs) to model the interacting dynamics in physical systems. 
In particular, they follow the paradigm of message passing [Kipf & Welling, 2017; Xu et al., 2019; Wu et al., 2019; Alet et al., 2019; Yu et al., 2018; Yang et al., 2022; Zhang et al., 2021], which aggregates edge information from the neighbors of each node to update the node representation in a progressive fashion. In reality, long-term forecasting [Nie et al., 2022; Zhou et al., 2022a; Zhao et al., 2020; Lan et al., 2022; Wu et al., 2021] is a practical yet challenging scenario for physics simulations. Existing methods [Pfaff et al., 2021; Shao et al., 2022; Cao et al., 2023] usually rely on an autoregressive strategy, which utilizes the current states for the next-time predictions and then feeds them back as input in an iterative manner. However, these one-step predictors often struggle to capture the long-term system dynamics, which could be governed by underlying partial differential equations (PDEs) \cite{Vadeboncoeur2023}. Additionally, they are prone to accumulating errors over iterative rollouts, which degrades the performance of their long-term predictions. Given these accumulated errors, it is highly anticipated to include future states into the prediction procedure to provide a foresight of systems, which not only enhances long-term predictions, but also infers extra knowledge connected with underlying PDEs to refine the short-term predictions. Towards this end, we provide a new perspective that learns from the future for present predictions (see Figure 1), and propose a simple yet effective approach named Foresight And Interpolation (FAIR) for long-term mesh-based simulations. In particular, FAIR takes a two-stage learning paradigm which first generates coarse long-term predictions using a continuous graph ordinary differential equation (ODE) model, and then refines these into accurate short-term predictions through interpolation. In the first stage, we extend neural ODEs \cite{Chen2018, Norcliffe2021} into graph ODE twins to provide coarse foresight, which models the evolution of both mesh node and edge representations using the neighborhood aggregation mechanism. To enhance the capacity to capture non-linear complex patterns, we incorporate historical states to augment the current embedding in ODEs. Our graph ODE model has the flexibility to generate various predictions of different steps ahead, which is optimized for a multi-task learning framework \cite{Momma2022}. In summary, our model can not only accord with the continuous nature of real-world systems, but also capture long-term dependencies with limited error accumulation. In the second stage, we employ a channel aggregation strategy to summarize the future predictions and current observations for interpolation, which is followed by further neighborhood aggregation in the observation space to refine the short-term predictions. More importantly, through the alternative foresight step and refinement step, our FAIR can provide refined long-term trajectories. We validate the effectiveness of FAIR via extensive experiments on four benchmark datasets, and our FAIR achieves significant improvements over various baseline models. In particular, our proposed FAIR achieves an error reduction up to 25.4% on CylinderFlow compared with the best baseline. In summary, the contributions of this paper are three-fold: (1) Innovative Perspective. We introduce a new future-to-present perspective for mesh-based simulations, which can effectively capture long-term dynamics with minimal error accumulation. (2) Novel Methodology. 
Our two-stage FAIR first produces coarse long-term predictions using a continuous graph ODE model, and then refines these into accurate short-term predictions through interpolation. These two steps are conducted alternatively for refined long-term predictions. (3) High Performance. Extensive experiments on four benchmark datasets demonstrate the superiority of FAIR over existing approaches. 2 RELATED WORK Learning-based Physics Simulations. Physics simulations can be advantageous in science when model parameters or boundary conditions are insufficient. Due to their great efficacy, learning-based physics simulations are gaining popularity in a variety of domains such as computational fluid dynamics \cite{Yang2017, Guo2016, Qiao2020, Maulik2021, Subh2022, Takamoto2023}. Initially, convolutional neural network-based methods are often built to learn from regular grids \cite{Peng2020}. Recently, MeshGraphNet \cite{Pfaff2021} makes an attempt to incorporate GNNs into learning mesh-based physics simulations on irregular meshes, followed by several extensions \cite{Shao2022, Sanchez-Gonzalez2020, Cao2023}. However, these algorithms often generate next-step predictions using current states, which fails to make accurate long-term predictions. \cite{Han2022} adopts an encoder-decoder structure... to compress data for long sequence modeling with Transformer. In contrast, our proposed FAIR provides a future-to-present perspective to capture long-term dynamics using neural ODE-based models. **Long-term Forecasting.** Based on historical observations, long-term forecasting (Nie et al., 2022; Zhao et al., 2020; Lan et al., 2022; Wu et al., 2021; Dendorfer et al., 2022; Malhan & Mittal, 2022; Zhou et al., 2022b; Mangalam et al., 2021) aims to make predictions for a long horizon with various applications with weather forecasting (Bi et al., 2023; Zhang et al., 2023) and economic analysis (Chudik et al., 2021). A range of Transformer-based architectures (Zhou et al., 2022a,b; Liu et al., 2022b; Tang & Matteson, 2021) have been introduced for long-term forecasting, which can get rid of gradient vanishing and exploding in recurrent neural networks (RNNs) (Salinas et al., 2020). These approaches usually focus on modeling single-agent systems. In contrast, we target a less-explored and challenging problem of long-term interacting dynamics forecasting (Kofinas et al., 2021) and propose an approach FAIR, which learns coarse long-term trajectories using a graph ODE model to provide future information for short-term refinement. ### 3 Preliminaries #### 3.1 Problem Formulation We aim to learn a neural simulator that uses neural operations to approximate the ground-truth physics dynamics on irregular meshes, which are usually driven by underlying PDEs. The dynamic system with both spatial correlations can be characterized using a mesh graph \( G = (\mathcal{V}, \mathcal{E}) \) with a set of mesh nodes \( \mathcal{V} \) and an edge set \( \mathcal{E} \). Given current states of mesh points, i.e., \( X^{t_0} \in \mathbb{R}^{N \times F} \) and (optionally) a series of historical states, i.e., \( \{X^{t_0-1}, \ldots, X^{t_0-\tau}\} \), the objective is to predict the future trajectories for all nodes \( X^t \in \mathbb{R}^{N \times F} \) (\( t_0 < t \leq t_0 + T \)), where \( F \) is the attribute dimension and \( N \) is the number of mesh nodes. We use prediction errors w.r.t. the ground truth to evaluate the performance. 
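To make the formulation concrete, one simulation sample under this setup can be stored as follows; the container and field names are illustrative and not part of the paper.

```python
from dataclasses import dataclass
import torch

@dataclass
class MeshSample:
    """One mesh-based simulation sample; fields and names are illustrative."""
    node_states: torch.Tensor   # (T, N, F) states X^t for all mesh nodes over time
    node_pos: torch.Tensor      # (N, D)    mesh-point positions p_i
    edge_index: torch.Tensor    # (2, E)    mesh connectivity as (i, j) pairs
    t0: int                     # index of the current time step; steps > t0 are prediction targets

    def history(self, tau: int) -> torch.Tensor:
        """Optional past states {X^{t0-1}, ..., X^{t0-tau}} used as extra input."""
        return self.node_states[max(self.t0 - tau, 0):self.t0]
```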
#### 3.2 Graph Neural Networks (GNNs) GNNs are a class of neural networks that operate directly on graph-structured data (Wei et al., 2022). They have been extensively studied for approximating pairwise interactions in mesh-based physical systems (Pfaff et al., 2021; Shao et al., 2022; Sanchez-Gonzalez et al., 2020). GNNs usually follow the paradigm of message passing (Kipf & Welling, 2017; Velickovic et al., 2018; Xu et al., 2019), where edge information is updated by its connected nodes and then neighborhood information is aggregated to update the node representations. Through this, GNNs can capture the complex interaction among mesh points, which reveals how the system changes from time step \( t_0 \) to time step \( t_0 + 1 \) (Cao et al., 2023; Yildiz et al., 2022; Look et al., 2023). #### 3.3 Neural Ordinary Differential Equations (ODEs) A complex physical system can be described by a series of coupled nonlinear ordinary differential equations (Shao et al., 2022; Han et al., 2022): \[ \frac{d h_i^t}{dt} = \Phi(h_1^t, h_2^t, \cdots, h_N^t), \] where \( h_i^t \) is the state for object \( i \) at time step \( t \) and \( \Phi(\cdot) \) is a function for capturing the interaction among objects, which can be a neural network automatically learned from data (Yoon et al., 2022; Huang et al., 2020; Chen et al., 2018; Huang et al., 2021). Given the initial states \( h_1^{t_0}, h_2^{t_0}, \cdots, h_N^{t_0} \) for all objects, the latent states of trajectories at arbitrary time steps can be calculated with a black-box ODE solver as follows: \[ h_i^t = h_i^{t_0} + \int_{s=t_0}^{t} \Phi(h_1^s, h_2^s, \cdots, h_N^s) ds. \] We model mesh-based physical system dynamics using neural ODEs in the latent space, with GNN as the ODE function \( \Phi \) to model the continuous interaction among mesh points. The latent initial states \( h_1^{t_0}, h_2^{t_0}, \cdots, h_N^{t_0} \) are computed via an encoder and the decoder recovers the whole trajectory \( X^t \) (\( t_0 < t \leq T \)) based on the latent states at each time step. Figure 2: An overview of the proposed FAIR. FAIR adopts an MPNN-based encoder to generate node representations, which are fed into graph ODE to generate predictions at different timestamps. These predictions are aggregated with channel attention to refine the short-term predictions. These foresight and refinement steps are conducted alternatively for accurate long-term predictions. 4 THE PROPOSED FAIR In this paper, we study the problem of long-term mesh-based simulations and introduce a new approach named FAIR from a future-to-present perspective. Existing methods (Pfaff et al., 2021; Xu et al., 2021; Shao et al., 2022) usually utilize the current states for next-time predictions, followed by iterative rollouts to predict whole trajectories while our FAIR takes a future-to-present perspective, allowing the model to maintain foresight throughout the evolution. Specifically, FAIR incorporates a continuous graph ODE model that is enhanced with past states, which generates coarse long-term trajectories under a multi-task learning framework. To improve the accuracy of our short-term predictions, we employ a channel aggregation strategy, which refines the trajectory through an interpolation process. Finally, we adopt pyramid-like propagation for the whole trajectories. An overview of FAIR can be found in Figure 2 and we will elaborate on the details as follows. 4.1 COARSE FORESIGHT WITH GRAPH ODE TWINS The key insight of our FAIR is to learn from long-term trajectories. 
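The computational pattern behind Eqns. 1-2, namely a GNN as the ODE function $\Phi$ acting on latent node states and integrated by a black-box solver, is the building block FAIR relies on and can be sketched as follows. The sum-aggregation MLP and the fixed-step Euler solver are simplifying assumptions standing in for the paper's exact components.

```python
import torch
import torch.nn as nn

class GNNODEFunc(nn.Module):
    """Phi in Eqn. 1: updates each node from its own state and aggregated neighbor states."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, edge_index):
        src, dst = edge_index                                    # (2, E) neighbor pairs
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])     # sum messages from neighbors
        return self.mlp(torch.cat([h, agg], dim=-1))             # dh/dt for every node

def euler_solve(func, h0, edge_index, t0, t1, n_steps=20):
    """Fixed-step Euler integration of Eqn. 2 (a stand-in for a black-box ODE solver)."""
    h, dt = h0, (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * func(h, edge_index)
    return h
```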
As a preliminary step, it is crucial to generate high-quality long-term trajectories based on historical predictions. Instead of using inefficient iterative rollouts (Pfaff et al., 2021; Cao et al., 2023), we follow the idea of neural ODEs (Chen et al., 2018) to model the continuous evolution of both mesh nodes and edges within dynamic systems. To learn the evolution between mesh points, we extend neural ODEs into graph ODE twins, which employ a neighborhood aggregation mechanism to update the representations of both nodes and edges and are flexible enough to produce outputs at any given timestamp. In particular, FAIR leverages an encoder-ODE-decoder architecture where both the encoder and decoder components are built upon message passing neural networks (MPNNs). The effectiveness of the graph ODE twins is further enhanced by augmenting latent states with historical data, thereby improving the capability to capture evolving patterns under potential noise. **MPNN-based Encoder.** To begin with, we first generate state representations for mesh nodes and their associated edges using a message passing mechanism (Kipf & Welling, 2017). Specifically, both node and edge embeddings are initialized using feed-forward networks (FFNs) as follows: \[ v_i^{(0)} = f^n(x_i), \quad e_{ij}^{(0)} = f^e(p_i - p_j), \] where \( x_i \) and \( p_i \) are the feature and position vectors of node \( i \), respectively. \( f^n(\cdot) \) and \( f^e(\cdot) \) are implemented by two FFNs for nodes and edges, respectively. Then, we stack a range of MPNN layers to learn semantics from geometric graphs in an iterative manner. The updating rule for each node \( i \) can be summarized as follows: \[ v_i^{(l+1)} = \psi^n(v_i^{(l)}, \sum_{j \in N(i)} e_{ij}^{(l)}), \quad e_{ij}^{(l+1)} = \psi^e(v_i^{(l)}, v_j^{(l)}, e_{ij}^{(l)}), \] where \( v_i^{(l)} \) and \( e_{ij}^{(l)} \) are the node and edge embeddings at the layer \( l \), respectively. \( N(i) \) denotes the neighbors of node \( i \). \( \psi^e \) and \( \psi^n \) are two FFNs for feature transformations. After stacking \( L \) layers, we can generate discriminative node and edge representations for the current timestamp $t_0$, i.e., $v_i^{t_0} = v_i^{(L)}$ and $e_{ij}^{t_0} = e_{ij}^{(L)}$, for the subsequent generative model. **Graph ODE.** Neural ODEs are commonly used to model dynamical systems with continuous evolution, which can output flexible predictions at any given timestamps. While previous approaches often integrate neighborhood information into ODEs to model interacting dynamics (Huang et al., 2020; Gupta et al., 2022), they typically fall short in explicitly capturing the evolving dynamics of edges. To address this gap, we introduce a propagator named graph ODE twins, which adopts separate neural ODEs to model the evolution of both nodes and edges. Furthermore, we notice that data-driven prediction models (Huang et al., 2020; 2021) often struggle to accurately deduce continuous evolution solely based on the current state. To mitigate this, we incorporate historical states as supplementary data in our graph ODE model. In practice, we use them to augment the current embeddings in the ODEs, thereby enhancing the capacity to capture the evolving dynamics. In formulation, we generate augmented embeddings at the timestamp $t$ as: $$\tilde{v}_i^t = \begin{bmatrix} v_i^t \\ v_i^{t-1} \end{bmatrix}, \tilde{e}_{ij}^t = \begin{bmatrix} e_{ij}^t \\ e_{ij}^{t-1} \end{bmatrix},$$ where $v_i^{t-1}$ and $e_{ij}^{t-1}$ are from the last timestamp.
Then, we model the evolution using both the augmented node embeddings and edge embeddings based on the following ODEs: $$\frac{dv_i^t}{dt} = \Phi^n(\tilde{v}_i^t, \sum_{j \in N(i)} \tilde{e}_{ij}^t), \quad \frac{de_{ij}^t}{dt} = \Phi^e(\tilde{e}_{ij}^t, \tilde{v}_i^t, \tilde{v}_j^t),$$ where $\Phi^n$ and $\Phi^e$ are implemented by two FFNs. Through a standard ODE solver, we are able to output the hidden embeddings for the future timestamps ranging from $t_1$ to $t_L$ in one pass, where $r$ is the step size and $L$ is the number of predictions (i.e., $t_l = t_0 + rl - r + 1$). Different values of $r$ correspond to different prediction horizons. Our graph ODE model is a special case of delay differential equations (DDEs) (Balachandran et al., 2009), i.e., $\frac{dv_i^t}{dt} = \phi(v_i^t, v_i^{t-\tau}, t)$, which have been shown to have an improved capacity for capturing non-linear dynamics (Zhu et al., 2021). We further provide a theorem to show that our graph ODE has a unique absolutely continuous solution. To begin, denote $y^t = (v_1^t, \ldots, v_N^t, e_{12}^t, \ldots, e_{N-1,N}^t)$; then our system can be represented as: $$\begin{cases} \frac{dy^t}{dt} = \Phi(y^t, y^{t-1}), & t \in [t_0, t_0 + T] \\ y(t) = y(t_0 - 1), & t \in [t_0 - 1, t_0), \end{cases}$$ where we add the definition of $y$ on the interval $[t_0 - 1, t_0)$ to make sure our system is well-defined. **Lemma 4.1.** Suppose that the FFN $\Phi$ has all absolute values of its weights and biases bounded by $M$ and $B$, respectively, and that ReLU is adopted as the activation function. Then, Eqn. 7 has a unique absolutely continuous solution. The proof is given in Appendix A. Through our analysis, we show that the future trajectories are predictable based on the historical states, a crucial property in system dynamics modeling. Furthermore, our predictions can preserve the continuity embedded in mesh-based simulations. **MPNN-based Decoder.** In the end, we adopt a decoder $\psi_{dec}(\cdot)$ to generate the predictions at any given timestamp as follows: $$\hat{x}_i^t = \psi_{dec}(\{v_i^t\}_{i \in V}, \{p_{ij}\}_{(i,j) \in E}),$$ where $p_{ij} = p_i - p_j$ is reused to provide position information. The architecture of the decoder is the same as the MPNN in the encoder to ensure effective neighborhood learning. To train our graph ODE twins, we minimize the mean square error over the different timestamps as follows: $$L_{ode} = \sum_{l=1}^{L} \sum_{i=1}^{N} ||x_i^{t_l} - \hat{x}_i^{t_l}||_2^2,$$ where each timestamp $t_l$ corresponds to a different multi-step-ahead prediction task. **Comparison with One-step Predictors.** Current one-step predictors generate the next-time states from the current ones (Pfaff et al., 2021; Shao et al., 2022; Cao et al., 2023) and then proceed in an autoregressive manner to produce the entire trajectories. In contrast, our graph ODE twins have two strengths. Firstly, our approach is capable of capturing the continuous interactive dynamics that naturally occur in mesh-based physical systems. Secondly, our approach is optimized under the framework of multi-task learning. In particular, we generate predictions with different prediction lengths, each of which corresponds to a task. This strategy enables the model to learn long-term dependency with limited error accumulation, thereby enhancing the learning process (a code sketch of this foresight step is provided below). 4.2 Refinement with Interpolation While our graph ODE twins model is effective, it has the potential to underfit long-term trajectories in the multi-task learning framework.
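Before describing how the refinement module addresses this, the foresight step of Eqns. 5-8 can be sketched as follows. The fixed-step Euler loop, the layer sizes, and the way the one-timestamp history buffer is maintained are assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class GraphODETwins(nn.Module):
    """Eqns. 5-6: coupled node/edge ODEs driven by history-augmented embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.phi_n = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.phi_e = nn.Sequential(nn.Linear(6 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def derivatives(self, v, e, v_hist, e_hist, edge_index):
        src, dst = edge_index
        v_aug = torch.cat([v, v_hist], dim=-1)                    # \tilde v (Eqn. 5)
        e_aug = torch.cat([e, e_hist], dim=-1)                    # \tilde e (Eqn. 5)
        agg = torch.zeros(v_aug.shape, device=v.device).index_add_(0, dst, e_aug)  # incident-edge sum
        dv = self.phi_n(torch.cat([v_aug, agg], dim=-1))          # node derivative (Eqn. 6)
        de = self.phi_e(torch.cat([e_aug, v_aug[src], v_aug[dst]], dim=-1))         # edge derivative (Eqn. 6)
        return dv, de

    def rollout(self, v, e, edge_index, n_future, n_sub=5, dt=0.2):
        """Euler integration emitting latent node embeddings at n_future future timestamps."""
        v_hist, e_hist, outs = v.clone(), e.clone(), []           # history initialized at t0
        for _ in range(n_future):
            v0, e0 = v, e
            for _ in range(n_sub):                                # integrate one unit of time
                dv, de = self.derivatives(v, e, v_hist, e_hist, edge_index)
                v, e = v + dt * dv, e + dt * de
            v_hist, e_hist = v0, e0                               # previous-timestamp states for Eqn. 5
            outs.append(v)
        return outs                                               # decoded into \hat x via Eqn. 8
```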
To address this issue, we introduce a refinement module, which uses the coarse long-term trajectories to improve the accuracy of short-term predictions, i.e., \( \hat{x}_{t_1}^i \). Given that the predicted future states beyond the target one, i.e., \( \{\hat{x}_{t_l}^i\}_{l=2}^L \), and the current state, i.e., \( x_{t_0}^i \), are both available, this refinement process can be viewed as a form of interpolation. To be specific, we introduce a set of learnable parameters to serve as channel attention, which are the weights for interpolation. Moreover, the message passing procedure is performed in the observation space rather than the embedding space to learn the offset with enhanced efficiency. In formulation, we define the learnable weights as \( w^l \in \mathbb{R}^F \), and the aggregated observation \( z_{t_1}^i \) for node \( i \) can be written as: \[ z_{t_1}^i = x_{t_0}^i \odot w^0 + \sum_{l=1}^{L} \hat{x}_{t_l}^i \odot w^l, \] where the target-state prediction \( \hat{x}_{t_1}^i \) is also involved for a more comprehensive offset mining and \( \odot \) denotes the element-wise product of two vectors. Compared to standard interpolation, our approach facilitates information exchange across different channels, thereby enhancing the capacity to capture complex patterns. Finally, we stack several MPNNs for neighborhood interaction, which output the final prediction of the offset as follows: \[ \hat{x}_{t_1}^{i,\text{off}} = \psi_{\text{ref}}(\{z_{t_1}^i\}_{i \in V}, \{p_{ij}\}_{(i,j) \in E}), \] where \( \psi_{\text{ref}} \) has a similar architecture to the MPNN-based decoder, but with fewer layers. In the end, the refined predictions can be obtained by combining coarse predictions and offsets: \[ \hat{x}_{t_1}^{i,\text{ref}} = \hat{x}_{t_1}^i + \hat{x}_{t_1}^{i,\text{off}}. \] The mean square error is minimized for the target observations: \[ L_{re} = \sum_{i=1}^{N} ||\hat{x}_{t_1}^{i,\text{ref}} - x_{t_1}^i||_2. \] In contrast to coarse foresight over multiple timestamps, our refinement module targets a single timestamp, enabling us to further minimize the training loss. Unlike previous one-step predictors (Cao et al., 2023), our FAIR uses future predictions generated by the graph ODE twins for interpolation, which is empirically simpler than extrapolation.

Our proposed FAIR employs a two-stage optimization strategy. In the first stage, we train the model to generate coarse long-term trajectories using a multi-task learning framework. In the second stage, we shift our focus to fine-tuning short-term predictions and remove the additional supervision of Eqn. 9. A comprehensive summary of the learning algorithm can be found in Algorithm 1. This approach can be viewed as a knowledge distillation framework (Gou et al., 2021; Cho & Hariharan, 2019; Park et al., 2019) where the teacher model (i.e., the graph ODE twins) gains broad and generalized knowledge from multiple tasks, while the student model (i.e., the refinement module) focuses on the specific target and benefits from the foresight provided by the teacher model as well. In addition, our foresight steps generate a range of coarse predictions with potential noise, which serve as input perturbations for the refinement step and help alleviate potential overfitting.

4.3 Pyramid-like Propagation

To generate long-term predictions, we alternately conduct foresight generation and interpolation, resulting in a pyramid-like architecture as illustrated in Figure 1.
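For concreteness, below is a minimal sketch of the refinement step from Section 4.2, which is the operation alternated with foresight in this pyramid-like propagation. The class name, the placeholder MPNN callable, and the tensor layouts are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of channel-attention interpolation plus offset refinement (Sec. 4.2).
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Interpolates [x_{t0}, x_hat_{t1}, ..., x_hat_{tL}] with learnable channel weights,
    then applies a shallow MPNN (assumed callable) to predict an offset for step t1."""
    def __init__(self, feat_dim, num_preds, mpnn_ref):
        super().__init__()
        # learnable weights w^0 ... w^L, one F-dimensional vector per interpolated source
        self.w = nn.Parameter(torch.full((num_preds + 1, feat_dim), 1.0 / (num_preds + 1)))
        self.mpnn_ref = mpnn_ref          # assumed shallow MPNN psi_ref in observation space

    def forward(self, x_t0, coarse_preds, pos_diff, edge_index):
        # x_t0: [N, F]; coarse_preds: list of L tensors [N, F] for t_1 ... t_L
        z = x_t0 * self.w[0]
        for l, x_hat in enumerate(coarse_preds, start=1):
            z = z + x_hat * self.w[l]                         # element-wise channel mixing
        offset = self.mpnn_ref(z, pos_diff, edge_index)       # predicted offset at t_1
        return coarse_preds[0] + offset                       # refined prediction at t_1
```

During inference, a coarse foresight rollout of $L$ steps and one such refinement are presumably applied in alternation, with each refined state anchoring the next rollout, which yields the pyramid-like structure described here.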
By employing the graph ODE model, our FAIR is able to capture the long-term dynamics governed by the underlying rules. In addition, during our pyramid-like alternative propagation, our FAIR gains insight into future states which helps in mitigating potential error accumulation. A comprehensive summary of the inference algorithm can be found in Algorithm 2. Table 1: The RMSE results of the compared methods over different prediction lengths of 1, 50, and all time steps. The best results are displayed in bold. Partial results are consistent with Cao et al. (2023). OOM indicates out-of-memory. | Dataset | CylinderFlow RMSE ($\times 10^{-3}$) ↓ | Airfoil RMSE ($\times 10^{-1}$) ↓ | DeformingPlate RMSE ($\times 10^{-4}$) ↓ | InflatingFont RMSE ($\times 10^{-4}$) ↓ | |---------------|----------------------------------------|----------------------------------|-----------------------------------------|----------------------------------------| | | 1 | 50 | all | 1 | 50 | all | 1 | 50 | all | 1 | 50 | all | | GraphUNets | 8.09 | 187 | 1650 | 2.93 | 117 | 611 | 2.03 | 5.19 | 54.6 | OOM | OOM | OOM | | GNS | 2.61 | 50.7 | 176 | 5.29 | 175 | 639 | 2.23 | 3.21 | 17.2 | 2.14 | 36.9 | 50.7 | | MeshGraphNet | 2.26 | 43.9 | 107 | 4.35 | 166 | 695 | 1.98 | 2.88 | 15.1 | 1.95 | 17.8 | 36.5 | | MS-GNN-Grid | 2.20 | 27.4 | 84.9 | 2.68 | 122 | 556 | 2.20 | 2.78 | 14.8 | 1.87 | 32.4 | 37.8 | | Social-ODE | 2.06 | 36.5 | 99.0 | 2.61 | 135 | 564 | 2.17 | 3.11 | 15.9 | 1.80 | 14.4 | 29.8 | | BSMS-GNN | 2.04 | 24.2 | 83.7 | 2.88 | 110 | 421 | 2.87 | 3.18 | 16.9 | 1.77 | 10.8 | 22.0 | | FAIR (Ours) | 1.75 | 22.6 | 62.4 | 1.88 | 95 | 405 | 1.92 | 2.82 | 13.7 | 0.79 | 10.6 | 17.8 | Figure 3: Visualization of different methods on CylinderFlow at multiple time steps. We render the velocity in the fluid field with the time steps among 1, 50, 100, 300 and 500. 5 EXPERIMENTS 5.1 EXPERIMENTAL SETUP Datasets. To evaluate our FAIR, we employ four benchmark physics simulation datasets (Pfaff et al., 2021; Cao et al., 2023): 1) CylinderFlow, which simulates the flow of an incompressible fluid around a cylinder; 2) Airfoil, which focuses on the simulating compressible flow around an airfoil; 3) DeformingPlate, which involves the simulation of elastic plate deformation using an actuator; and 4) InflatingFont, which depicts the inflation of enclosed elastic surfaces. More detailed information regarding these datasets can be found in Appendix C. Baselines. We compare FAIR with a range of baselines, including neural simulation and a neural ODE method. The neural simulation methods include GraphUNets (Gao & Ji, 2019), GNS (Sanchez-Gonzalez et al., 2020), MeshGraphNet (Pfaff et al., 2021), MS-GNN-GRID (Lino et al., 2021), and BSMS-GNN (Cao et al., 2023). We also adopt a neural ODE approach Social-ODE (Wen et al., 2022). More details of these baselines can be found in Appendix D. Implementation. The root mean square deviation (RMSE) is taken as the metric to evaluate the performance. We vary the prediction lengths to show the performance in both short-term and long-term forecasting tasks. We set the step size $r$ and the future prediction number $L$ as 2 and 3 as default, respectively. More implementation details can be found in Appendix E. 5.2 PERFORMANCE COMPARISON Quantitative Comparison. The compared performance is recorded in Table 1. From the results, we can observe that GraphUNets achieve the worse performance compared with the other methods. Table 2: Ablation studies of different variants on four datasets. 
| Dataset | CylinderFlow RMSE ($\times 10^{-3}$) ↓ | Airfoil RMSE ($\times 10^{-1}$) ↓ | DeformingPlate RMSE ($\times 10^{-4}$) ↓ | InflatingFont RMSE ($\times 10^{-4}$) ↓ |
|---------|----------------------------------------|----------------------------------|----------------------------------------|----------------------------------------|
| FAIR V1 | 1.82 | 24.3 | 75.5 | 1.94 |
| FAIR V2 | 2.03 | 27.2 | 67.5 | 2.31 |
| FAIR V3 | 1.91 | 25.0 | 94.1 | 2.07 |

FAIR 1.75 22.6 62.4 1.88 95 405 1.92 2.82 13.7 0.79 10.6 17.8

Figure 4: Visualization of different methods on Airfoil with the time steps among 1, 50, 100, 150, and 200.

The other compared methods all target dynamical system modeling, which indicates the difficulty of mesh-based physics simulations and the need to design special approaches for this problem. Furthermore, it can be observed that FAIR performs the best across all four datasets in terms of both short-term and long-term forecasting. In particular, compared to the best baseline on each dataset, FAIR achieves an average error reduction of 25.1% in 1-step simulations and 13.9% in all-step simulations. The significant performance improvement of FAIR can be attributed to two factors: (1) The introduction of our graph ODE twins in our foresight step. The graph ODE can significantly reduce the error accumulation in the multi-task learning framework, which also provides future information for short-term predictions; (2) The introduction of our refinement step. It can leverage coarse future predictions to refine short-term predictions with channel aggregation, thus mitigating the underfitting resulting from multi-task learning. Moreover, the performance of our FAIR on DeformingPlate is a little worse than that of MS-GNN-Grid at 50 steps. The potential reason is that the high complexity of DeformingPlate makes it harder to generate accurate foresight, which could deteriorate the model performance.

Qualitative Comparison. We also conduct visualization to compare our FAIR with representative baselines and the ground truth. The compared results on CylinderFlow and Airfoil are shown in Figure 3 and Figure 4, respectively. From the results, we can make the following observations: (1) We can find that serious error accumulation occurs for one-step predictors (i.e., MeshGraphNet and BSMS-GNN). For example, in the last frame of Figure 4, MeshGraphNet and BSMS-GNN have a huge gap compared with the ground truth. (2) Our model can make precise long-term predictions consistently. The potential reason is that our ODE-based model can capture the continuous dynamics in physical systems and mitigate the error accumulation during propagation. In particular, all the baselines fail to reflect correct flow fields at the last time step while our FAIR can still approximate the ground truth. (3) Our proposed FAIR can also make accurate short-term predictions, which demonstrates that future foresight can benefit short-term predictions as well.

Figure 5: (a), (b) The performance with respect to different step sizes (horizon) $r$ and prediction lengths $L$ on CylinderFlow. (c) RMSE of our FAIR and two baselines with respect to different prediction lengths on Airfoil. (d) The comparison of the running time of FAIR (with different future prediction numbers), BSMS (BSMS-GNN), and MGN (MeshGraphNet).

5.3 Analysis

Ablation Studies. To evaluate the effectiveness of the subcomponents in FAIR, we introduce three model variants as follows: (1) FAIR V1, which removes the graph ODE twins module and utilizes one-step predictors for coarse foresight.
(2) FAIR V2, which removes the refinement step and outputs the results using the graph ODE. (3) FAIR V3, which removes the multi-task learning framework and involves only the next-step predictions in refinement. The compared results are presented in Table 2. From these results, we have the following observations. Firstly, by comparing FAIR V1 with the full model, we can validate that the graph ODE twins module can capture continuous dynamics to provide high-quality future information for accurate predictions. Secondly, our full model achieves better performance than FAIR V2, which validates that the refinement step is indispensable for preventing potential underfitting in the multi-task learning framework. Thirdly, FAIR V3 performs much worse than the full model, which validates that future information makes a critical contribution to effective mesh-based simulations.

Sensitivity Analysis. We investigate the impact of different parameters on the performance of FAIR, i.e., the step size $r$ and the number of future predictions $L$. First, we vary $r$ in $\{1, 2, 3, 4, 5\}$ with the other parameters fixed, and the results are shown in Figure 5(a). We can observe that the errors first decrease and then increase as $r$ rises. The potential reason is that when $r$ is small, a larger $r$ can provide a larger horizon, while a too large $r$ would take the foresight far away from our target, which makes the interpolation unreliable. Next, we vary the prediction length $L$ in $\{1, 2, 3, 4, 5\}$, and the results are shown in Figure 5(b). We can observe an error reduction as $L$ rises before saturation, indicating that including more future information boosts model performance.

Predictions at Different Time Steps. We compare the prediction errors of our FAIR and two baselines at different time steps in terms of RMSE on Airfoil. The results are shown in Figure 5(c). We can find that FAIR exhibits stronger modeling capabilities with relatively lower errors at large time steps, while both baselines suffer from serious error accumulation. This validates that our FAIR utilizes foresight to reduce the error accumulation for long-term predictions.

Efficiency Analysis. We analyze the efficiency of MeshGraphNet, BSMS-GNN, and FAIR while varying the number of future predictions $L$. The computational time of one-step predictions on a single NVIDIA A100 GPU is reported. As shown in Figure 5(d), FAIR shows similar efficiency to the baseline methods, demonstrating that FAIR does not introduce major additional time costs. Furthermore, it is observed that the time cost of FAIR increases as $L$ rises. Considering the trade-off between efficiency and effectiveness, we set $L$ to 3 in our implementation.

6 CONCLUSION

In this paper, we study the problem of long-term mesh-based physics simulations and propose a new approach, FAIR, to address it. Our FAIR utilizes a future-to-present perspective, which consists of two steps, i.e., foresight and refinement, for accurate simulations. In the first step, we use a graph ODE model that integrates previous states to learn coarse long-term trajectories using a multi-task learning framework. In the second step, we employ a channel aggregation strategy to aggregate the trajectories for refined short-term predictions. Our proposed FAIR can provide accurate long-term trajectories through the alternating propagation of foresight and refinement. We believe that our study provides a brand-new perspective on learning long-term mesh-based simulations.
However, our work has a limitation that it cannot make accurate predictions when system states fluctuate in unsteady flows, a complicated scenario encountered in fluid mechanics. In the future work, we plan to expand the proposed FAIR to accommodate more complex simulations in physical and biological applications such as molecular dynamics simulations and protein structural analysis. ETHICS STATEMENT We acknowledge that all co-authors of this work have read and committed to adhering to the ICLR Code of Ethics. REPRODUCIBILITY STATEMENT To increase reproducibility, we have provided all the details of FAIR in Appendix E. Our code is available at https://anonymous.4open.science/r/FAIR anonymously. We will make the code public after the anonymity period. The datasets utilized in this paper are publicly available and representative ones. We obey the original settings and divisions without incorporating any additional data. The baseline methods that we utilize are all publicly accessible. The experimental results of the baselines are consistent with Cao et al. (2023). REFERENCES Ferran Alet, Adarsh Keshav Jeewajee, Maria Bauza Villalonga, Alberto Rodriguez, Tomas Lozano-Perez, and Leslie Kaelbling. Graph element networks: adaptive, structured computation and memory. In ICML, 2019. Balakumar Balachandran, Tamás Kalmár-Nagy, and David E Gilsinn. Delay differential equations. Springer, 2009. Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Accurate medium-range global weather forecasting with 3d neural networks. Nature, 2023. Yadi Cao, Menglei Chai, Minchen Li, and Chenfanfu Jiang. Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network. In ICML, 2023. Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In NeurIPS, 2018. Jang Hyun Cho and Bharath Hariharan. On the efficacy of knowledge distillation. In ICCV, 2019. Alexander Chudik, Kamiar Mohaddes, M Hashem Pesaran, Mehdi Raissi, and Alessandro Rebucci. A counterfactual economic analysis of covid-19 using a threshold augmented multi-country model. Journal of International Money and Finance, 2021. Patrick Dendorfer, Vladimir Yugay, Aljosa Osep, and Laura Leal-Taixé. Quo vadis: Is trajectory forecasting the key towards long-term multi-object tracking? In NeurIPS, 2022. Fabio V Difonzo, Pawel Przybylowicz, and Yue Wu. Existence, uniqueness and approximation of solutions to carathéodory delay differential equations. Journal of Computational and Applied Mathematics, 2024. Qiujie Dong, Zixiong Wang, Manyi Li, Junjie Gao, Shuangmin Chen, Zhenyu Shu, Shiqing Xin, Changhe Tu, and Wenping Wang. Laplacian2mesh: Laplacian-based mesh understanding. IEEE Transactions on Visualization and Computer Graphics, 2023. Thomas D Economon, Francisco Palacios, Sean R Copeland, Trent W Lukaczyk, and Juan J Alonso. Su2: An open-source suite for multiphysics simulation and design. Aiaa Journal, 2016. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. Stathi Fotiadis, Eduardo Pignatelli, Mario Lino Valencia, Chris Cantwell, Amos Storkey, and Anil A Bharath. Comparing recurrent and convolutional neural networks for predicting wave propagation. arXiv preprint arXiv:2002.08981, 2020. 1https://github.com/google-deepmind/deepmind-research/tree/master/meshgraphnets 2https://github.com/Eydcao/BSMS-GNN
TFKIfhvdmZ
CMA-ES is introduced but not really elaborated upon. It seems to be very important for sampling different policy parameters. How is the branching of the policy parameters in different directions in measure space done?
PROXIMAL POLICY GRADIENT ARBORESCENCE FOR QUALITY DIVERSITY REINFORCEMENT LEARNING Sumeet Batra University of Southern California Los Angeles, CA 90089 ssbatra@usc.edu Matthew C. Fontaine University of Southern California Los Angeles, CA 90089 mfontain@usc.edu Stefanos Nikolaidis University of Southern California Los Angeles, CA 90089 nikolaid@usc.edu Bryon Tjanaka University of Southern California Los Angeles, CA 90089 tjanaka@usc.edu Aleksei Petrenko University of Southern California Los Angeles, CA 90089 petrenko@usc.edu Gaurav S. Sukhatme University of Southern California Los Angeles, CA 90089 gaurav@usc.edu ABSTRACT Training generally capable agents that thoroughly explore their environment and learn new and diverse skills is a long-term goal of robot learning. Quality Diversity Reinforcement Learning (QD-RL) is an emerging research area that blends the best aspects of both fields – Quality Diversity (QD) provides a principled form of exploration and produces collections of behaviorally diverse agents, while Reinforcement Learning (RL) provides a powerful performance improvement operator enabling generalization across tasks and dynamic environments. Existing QD-RL approaches have been constrained to sample efficient, deterministic off-policy RL algorithms and/or evolution strategies, and struggle with highly stochastic environments. In this work, we, for the first time, adapt on-policy RL, specifically Proximal Policy Optimization (PPO), to the Differentiable Quality Diversity (DQD) framework and propose additional improvements over prior work that enable efficient optimization and discovery of novel skills on challenging locomotion tasks. Our new algorithm, Proximal Policy Gradient Arborescence (PPGA), achieves state-of-the-art results, including a 4x improvement in best reward over baselines on the challenging humanoid domain. 1 INTRODUCTION Quality Diversity (QD) algorithms enable the exploration and discovery of diverse skills in a behavior space. For example, a QD algorithm can train different locomotion gaits for a walker (Cully et al., 2015), discover different grasping trajectories for a manipulator (Morel et al., 2022), or generate a diverse range of human faces (Fontaine & Nikolaidis, 2021). However, since these algorithms are generally oriented towards solving exploration problems, they struggle to find performant policies in high-dimensional robot learning tasks. QD-RL is an emerging field that attempts to combine the principled exploration capabilities of QD with the powerful performance improvement capabilities of RL. Prior methods have leveraged off-policy RL, specifically TD3, to estimate the gradient of performance, and either Evolution Strategies (ES) or TD3 to estimate the gradient of diversity in order to search for diverse, high-quality policies. They have shown success in exploration problems and certain robot locomotion tasks (Nilsson & Cully, 2021; Pierrot et al., 2022; Tjanaka et al., 2022b). Nonetheless, there remains a gap in performance between QD-RL and standard RL algorithms on continuous control tasks. Furthermore, off-policy RL algorithms were not designed with massive parallelization in mind, and there is little literature that explores how to leverage modern massively- parallelized simulators with these algorithms, whereas there are numerous works exploring on-policy RL in these regimes (Makoviychuk et al., 2021; Rudin et al., 2021; Handa et al., 2022; Batra et al., 2021; Huang et al., 2022). 
Figure 1: PPGA finds a diverse archive of high-performing locomotion behaviors for a humanoid agent by combining PPO gradient approximations with Differentiable Quality Diversity algorithms. The archive’s dimensions correspond to the measures $m_1$ and $m_2$, i.e., the proportion of time that the left and right feet contact the ground. The color of each cell shows the objective value, i.e., how fast the humanoid moves. For instance, jumping moves the humanoid forward quickly, with the left and right feet individually contacting the ground 30% and 22% of the time, respectively. From our investigation of prior methods, simply combining existing QD methods with an RL algorithm tends not to scale well to high-dimensional, highly dynamical systems such as Humanoid. For example, all QD-RL algorithms for locomotion to date use non-Markovian measures of behavioral diversity, which in many cases prevents direct RL-optimization. Most algorithms instead opt for policy parameter mutation, which struggles to scale well with deep neural networks. Prior methods that investigated combining Differentiable Quality Diversity and off-policy RL (Tjanaka et al., 2022b) achieved similar results as other baselines. However, given the gap in performance between standard RL and QD-RL algorithms in terms of best-performing policy, we believe that DQD algorithms, under a different formulation more synergistic with its underlying mechanisms, can close this gap. To this end, we leverage Proximal Policy Optimization (PPO) (Schulman et al., 2017), a popular on-policy RL algorithm, with Differentiable Quality Diversity (DQD) (Fontaine & Nikolaidis, 2021) because of the already present synergy. Specifically, DQD algorithms CMA-MEGA (Fontaine & Nikolaidis, 2021), and its more recent variation CMA-MAEGA (Fontaine & Nikolaidis, 2023), maintain a single search point (or policy in the case of RL) that moves through the behavior space and fills in new, unexplored regions with offspring policies constructed via gradient information collected from online data. It is through this high level view that we see the emergent synergy between PPO and DQD, in that PPO can be used to collect gradient estimates from online data when one or both of the objective and measure functions are Markovian and non-differentiable. We make several key changes to CMA-MAEGA and PPO to maximally leverage their synergy. Our new algorithm, Proximal Policy Gradient Arborescence (PPGA), to the best of our knowledge, is the first QD-RL algorithm to not only achieve 4x performance in best reward on the humanoid domain, but achieve the same level of performance as PPO without sacrificing any of the diversity in the discovered policies. Specifically, we make the following contributions: 1. We propose a vectorized implementation of PPO, VPPO, that jointly computes the objective and measure gradients with little overhead and without running separate PPO instances for each task. 2. We generalize prior CMA-based DQD algorithms as instances of Natural Evolution Strategies (NES) and show that contemporary NES methods, specifically xNES, enable better training stability and performance for DQD algorithms. 3. We introduce the notion of Markovian Measure Proxies (MMPs), which makes the typically non-Markovian measure functions used in QD-RL amenable to RL-optimization. 4. 
We propose a new method to move the current search point, hereafter referred to as the "search policy", to unexplored regions of the archive by iteratively "walking" it using collected online data and RL optimization of a novel multi-objective reward function.

2 BACKGROUND

2.1 DEEP REINFORCEMENT LEARNING

Reinforcement Learning algorithms search for a policy, a mapping of states to actions, that maximizes cumulative reward in an environment. RL assumes the discrete-time Markov Decision Process (MDP) formalism \((S, A, R, P, \gamma)\) where \(S\) and \(A\) are the state and action spaces respectively, \(R(s, a)\) is the reward function, \(P(s'|s, a)\) defines state transition probabilities, and \(\gamma\) is the discount factor. The RL objective is to maximize the discounted episodic return of a policy \(\mathbb{E} \left[ \sum_{k=0}^{T-1} \gamma^k R(s_k, a_k) \right]\) where \(T\) is the episode length. Deep Reinforcement Learning solves the RL problem by finding a policy \(\pi_\theta(a_t|s_t)\) parameterized by a deep neural network \(\theta\) that represents a state-action mapping.

On-policy Deep RL methods directly learn the policy \(\pi_\theta\) using experience collected by that policy or a recent version thereof. Contemporary methods (Mnih et al., 2016) fit the value function \(V_\phi(s_t)\) to discounted returns and estimate the advantage \(\hat{A}_t = \sum_{k=t}^{T-1} \gamma^{k-t} R(s_k, a_k) - V_\phi(s_t)\), which corresponds to the value of an action relative to the current policy (Schulman et al., 2016). From here, the gradient of the objective w.r.t. \(\theta\), or policy gradient, can be estimated as \(\mathbb{E}_{\pi_\theta} \left[ \nabla_\theta \log \pi_\theta(a_t|s_t) \hat{A}_t \right]\), and the policy \(\pi_\theta\) is trained using mini-batch gradient descent. Trust region policy gradient Deep RL methods constrain the policy updates to maintain the proximity of \(\pi_\theta\) to the behavior policy \(\pi_{\theta_{old}}\) that was used to collect the experience. TRPO (Schulman et al., 2015) takes the largest policy improvement step that satisfies a strict KL-divergence constraint. Proximal Policy Optimization (PPO) (Schulman et al., 2017) approximates the trust region by optimizing a clipped surrogate objective, where \(r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\) is the importance sampling ratio: \[ L(\theta) = \mathbb{E}_{\pi_\theta} \left[ \min\left(r_t(\theta) \hat{A}_t, \; \text{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon) \hat{A}_t\right) \right]. \]

Off-policy Deep RL algorithms learn parameterized state-action value functions \(Q_\theta(s_t, a_t)\) that estimate the value of taking action \(a_t\) in state \(s_t\). Then, actions are taken with a greedy policy \(\arg \max_a Q_\theta(s_t, a_t)\), or an \(\varepsilon\)-greedy variation thereof. Q-functions can be learned from experience collected by recent or past versions of the policy or another policy altogether. In continuous control problems, it can be difficult to find \(a^\star = \arg \max_a Q_\theta(s_t, a_t)\) due to an infinite number of possible actions. To work around this issue, off-policy methods such as DDPG (Lillicrap et al., 2016) and TD3 (Fujimoto et al., 2018) learn a deterministic policy \(\mu_\phi(s_t)\) by solving \(\max_\phi(Q_\theta(s_t, \mu_\phi(s_t)))\) using gradient ascent.
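Returning to the on-policy case, a minimal PyTorch sketch of the clipped surrogate objective above is given below; the advantage normalization and the default clip range are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of the PPO clipped surrogate loss (to be minimized).
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    # r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = torch.exp(log_probs - old_log_probs)
    adv = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    # maximizing the surrogate objective corresponds to minimizing its negation
    return -torch.min(unclipped, clipped).mean()
```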
Other off-policy methods, such as soft actor-critic (SAC) (Haarnoja et al., 2018), maintain an explicit policy \(\pi_\theta\), but similarly derive the policy gradient from the critic, allowing them to learn from off-policy data as well. 2.2 QUALITY DIVERSITY OPTIMIZATION Unlike single-objective optimization methods such as RL, Quality Diversity algorithms search for an archive of high-performing, diverse policies. An optimal archive essentially answers the question, "how does performance change with behavior?" by mapping out the optimization landscape of a pre-defined behavior space. The QD problem (Chatzilygeroudis et al., 2021) assumes an objective function \(f(\cdot)\) that quantifies the agent’s performance and \(k\) measure functions \(m_1(\cdot) ... m_k(\cdot)\) that characterize the agent’s behavior. The measure functions, represented jointly as \(m(\cdot)\), define an embedding the QD algorithm should span with diverse policies. The QD objective is to find a policy that maximizes \(f\) for every possible output of \(m\). However, the embedding formed by \(m\) is continuous, so the embedding space is discretized into a tessellation of \(M\) cells. The QD objective then becomes to maximize \(\sum_{i=1}^{M} f(\theta_i)\), where \(\theta_i\) is a policy whose measures \(m(\theta_i)\) fall in cell \(i\) of the tessellation. QD algorithms originated with NSLC (Lehman & Stanley, 2011a,b) and MAP-Elites (Mouret & Clune, 2015; Cully et al., 2015). While these early QD algorithms built on genetic algorithms, modern QD algorithms incorporate optimization techniques like evolution strategies (Fontaine et al., 2020; Conti et al., 2018; Colas et al., 2020), gradient ascent (Fontaine & Nikolaidis, 2021; 2023), and differential evolution (Choi & Togelius, 2021). Several works have applied QD optimization to generative design (Hagg et al., 2020; Gaier et al., 2018), procedural content generation (Gravina 2.3 Differentiable Quality Diversity The Differentiable Quality Diversity (DQD) (Fontaine & Nikolaidis, 2021) algorithm Covariance Matrix Adaptation Map Elites via Gradient Arborescence (CMA-MEGA) considers the first-order QD problem where the objective and measure functions are differentiable, with gradients w.r.t. policy parameters represented as $\nabla f = \frac{\partial f}{\partial \theta}$ and $\nabla m = \left[ \frac{\partial m_1}{\partial \theta}, ..., \frac{\partial m_k}{\partial \theta} \right]$. CMA-MEGA maintains a search policy $\pi_{\theta_\mu}$ in policy parameter space $(\theta_\mu \in \mathbb{R}^N)$ corresponding to some cell in the archive given by the measures $<m_1(\pi_{\theta_\mu}), ..., m_k(\pi_{\theta_\mu})>$, and a search distribution in objective-measure gradient coefficient space maintained by CMA-ES (Hansen, 2016), a zeroth-order optimizer that optimizes the coefficient distribution to produce coefficient vectors that point in the direction of greatest archive improvement. At a high level, CMA-MEGA branches off policies from the search policy in order to locally fill the archive, and then steps the search policy to new, unexplored regions of the archive. 
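Before the branching details that follow, a minimal sketch of the archive bookkeeping from Section 2.2: measures are binned into a grid tessellation and each cell keeps its best-performing solution. The class name, grid handling, and measure ranges are illustrative assumptions; libraries such as pyribs provide full-featured implementations.

```python
# Minimal sketch of a MAP-Elites-style grid archive (Sec. 2.2).
import numpy as np

class GridArchive:
    def __init__(self, cells_per_dim, measure_low, measure_high):
        self.cells = cells_per_dim
        self.low = np.asarray(measure_low, dtype=float)
        self.high = np.asarray(measure_high, dtype=float)
        self.elites = {}                      # cell index (tuple) -> (objective, params)

    def cell_index(self, measures):
        frac = (np.asarray(measures, dtype=float) - self.low) / (self.high - self.low)
        idx = np.clip((frac * self.cells).astype(int), 0, self.cells - 1)
        return tuple(idx)

    def add(self, params, objective, measures):
        key = self.cell_index(measures)
        if key not in self.elites or objective > self.elites[key][0]:
            self.elites[key] = (objective, params)
            return True                       # the archive improved
        return False

    def qd_score(self):
        return sum(obj for obj, _ in self.elites.values())
```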
During the branching step, the gradients $<\nabla f, \nabla m>_{\theta_\mu}$ and $\lambda$ gradient coefficient vectors $<c_0, ..., c_k>_{1,...,\lambda}$ sampled from the CMA-ES search distribution $c_i \sim \mathcal{N}(\mu, \Sigma) \in \mathbb{R}^{k+1}$ are combined via the dot product i.e., $<\nabla f, \nabla m>_{\theta_\mu} \cdot c_1, ...$ to produce local gradients $\nabla_1, ..., \nabla_\lambda$ around the search policy. Applying the gradients to the search policy gives us $\lambda$ branched policies $\pi_{\theta_1}, ..., \pi_{\theta_\lambda}$. The new policies can then be ranked by how much they improve the archive, i.e., $f(\pi_{\theta_i}) - f(\pi_{\theta_{old}}), i \in [1, \lambda]$, where $\pi_{\theta_{old}}$ is the incumbent policy in the archive corresponding to the same cell as $\pi_{\theta_\mu}$. Branched policies that map to new, unexplored cells in the archive have $f(\pi_{\theta_{old}})$ set to some minimum threshold. This implicitly biases the ranking towards the exploration of new, unvisited cells. This ranking is given to CMA-ES, which internally performs an update that steps the search distribution in the direction of the natural gradient w.r.t. greatest archive improvement. CMA-ES returns weights $w_1, ..., w_\lambda$ such that $\nabla_{step} = <w_1, ..., w_\lambda> \cdot <\nabla_1, ..., \nabla_\lambda>$ is the natural gradient in parameter space. This weighted linear recombination of the branching gradients is then used to step the search policy in the direction of greatest archive improvement $\theta_\mu \leftarrow \theta_\mu + \alpha \nabla_{step}$. The current state-of-the-art DQD algorithm, Covariance Matrix Adaptation Map Annealing via Gradient Arborescence (CMA-MAEGA) (Fontaine & Nikolaidis, 2023), introduced the concept of soft archives to CMA-MEGA. Instead of maintaining the best policy in each cell, the archive maintains a threshold $t_e$ and updates the threshold by $t_e \leftarrow (1 - \alpha)t_e + \alpha f(\pi_{\theta_\mu})$ when a new policy $\pi_{\theta_\mu}$ crosses the threshold of its cell $e$. The hyperparameter $0 \leq \alpha \leq 1$, referred to as the archive learning rate, controls how much time is spent optimizing a region of the archive before exploring a new region. Soft archives have many theoretical and practical benefits discussed in prior work (Fontaine & Nikolaidis, 2023). Our proposed PPGA algorithm builds directly on CMA-MAEGA. 2.4 Quality Diversity Reinforcement Learning Unlike the standard DQD formulation in which the analytical gradients of $f$ and $m$ can be computed, the QD-RL setting considers MDPs in which these functions are non-differentiable and must be approximated with model-free RL. The gradient approximations of $f$ and $m$ can be used to improve the performance and diversity of agents in an archive. QD-RL methods can be roughly divided into two subgroups. The first set of approaches directly optimizes over the entire archive by sampling existing policies in the archive and applying operations to the policies’ parameters that either improve their performance or diversity. For example, PGA-ME (Nilsson & Cully, 2021) collects experience from evaluated agents into a replay buffer and uses TD3 to derive a policy gradient that improves the performance of randomly sampled agents from the archive, while using genetic variation (Vassiliades & Mouret, 2018) on the same set of agents to improve diversity and fill new, unexplored cells. 
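Returning to the branching step of Section 2.3, a minimal sketch of one branching-and-stepping iteration is shown below. The `evaluate`, `incumbent_score`, and `insert` callbacks are assumed interfaces, and a plain Gaussian with rank-based recombination stands in for the full CMA-ES/xNES update, so this illustrates the mechanism rather than the exact algorithm.

```python
# Minimal sketch of a CMA-MEGA-style branching iteration.
import numpy as np

def branching_iteration(theta_mu, grad_f, grad_m, coeff_mean, coeff_cov,
                        evaluate, incumbent_score, insert, num_branches=32, lr=1.0):
    """theta_mu: flat search-policy params; grad_f: objective gradient;
    grad_m: list of k measure gradients; evaluate(theta) -> (f, measures)."""
    jacobian = np.vstack([grad_f] + list(grad_m))              # rows: [grad f; grad m_1; ...]
    coeffs = np.random.multivariate_normal(coeff_mean, coeff_cov, size=num_branches)

    improvements, branch_grads = [], []
    for c in coeffs:
        grad_i = c @ jacobian                                  # weighted gradient combination
        theta_i = theta_mu + grad_i                            # branched policy
        f_i, measures_i = evaluate(theta_i)
        # archive improvement: gain over the incumbent (or a minimum threshold for empty cells)
        improvements.append(f_i - incumbent_score(measures_i))
        insert(theta_i, f_i, measures_i)                       # try to add the branch to the archive
        branch_grads.append(grad_i)

    # rank-based recombination over the most-improving branches, standing in for the
    # natural-gradient weights that CMA-ES/xNES would return
    best = np.argsort(improvements)[::-1][: num_branches // 2]
    grad_step = np.mean([branch_grads[i] for i in best], axis=0)
    return theta_mu + lr * grad_step                           # step toward archive improvement
```

This branching view underlies the DQD family; archive-wide approaches such as PGA-ME, discussed above, instead mutate sampled elites directly.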
Similarly, QDPG (Pierrot et al., 2022) derives a policy and diversity gradient using TD3 and applies these operators to randomly sampled agents in the archive. Whereas the first family of QD-RL algorithms simultaneously search the behavioral embedding in many different regions at once, the second family uses the DQD formulation i.e., maintains a single search policy that explores new local regions one at a time using objective-measure gradient approximations. In prior work (Tianaka et al., 2022b), the authors considered objective gradient approximations via TD3 and OpenAI-ES, while approximating the measure function gradients with OpenAI-ES. In this work, we notice the unique on-policy nature of DQD algorithms and present a novel formulation that exploits their synergy with PPO. 3 PROPOSED METHOD: THE PROXIMAL POLICY GRADIENT ARBORESCENCE ALGORITHM We begin with the DQD algorithm CMA-MAEGA as our foundation. The algorithm can be roughly divided into three phases: (1) computing the objective-measure gradients for the branching phase, (2) providing the relative ranking of each branched policy w.r.t. the QD objective to CMA-ES, and (3) stepping the search policy in the direction of greatest archive improvement. Sections 3.1 and 3.2 focus on enabling RL optimization for phase one, 3.3 explores the connection between CMA-ES and NES and how this can improve training stability in phase two, and section 3.4 describes our method for walking the search policy with PPO in phase three. 3.1 MARKOVIAN MEASURE PROXIES QD problems often contain non-Markovian measure functions. For robot locomotion tasks, the standard measure function is the proportional foot contact time with the ground $m_i(\theta) = \frac{1}{T} \sum_{t=0}^{T} \delta_i(s_t)$ for each leg $i$, $i = 1...k$, where the Kronecker delta $\delta_i(s_t)$ indicates whether the $i$'th leg is in contact with the ground or not in state $s_t$, and $T$ is the episode length. However, this measure function is defined on a trajectory (i.e., the whole episode), making it non-Markovian and thus preventing us from using RL to estimate its gradient. To solve this issue, we introduce the notion of a Markovian Measure Proxy (MMP), which is a surrogate function that obeys the Markov property and has a positive correlation with the original measure function. For locomotion tasks, we can construct an MMP by simply removing the dependency on the trajectory and making the original measure function state-dependent, i.e., setting it to be $\delta_i(s_t)$. We can then use the exact same MDP as the standard RL formulation and replace the reward function with $\delta_i(s_t)$. 3.2 POLICY GRADIENTS FOR DIFFERENTIABLE QUALITY DIVERSITY OPTIMIZATION PPO is an attractive choice as our objective-measure gradient estimator because of its ability to scale with additional parallel environments. Being an approximate trust region method, the constrained policy update step provides some robustness to noisy and non-stationary objectives. This is particularly important in the QD-RL setting, where the QD-objective is highly non-stationary – that is, the QD-objective changes with the state of the archive, which is updated on each QD iteration. We treat the RL objective and $k$ MMPs, each one optimized by an actor-critic pair, as reward functions to optimize. 
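As a concrete illustration of the proxies from Section 3.1 that serve as these reward functions, below is a minimal sketch of a Markovian measure proxy for the foot-contact measure; the `read_contacts` helper that extracts per-foot contact flags from a state is an assumed interface.

```python
# Minimal sketch of a Markovian Measure Proxy (Sec. 3.1) for a locomotion measure.
import numpy as np

def foot_contact_indicator(state, foot_index, read_contacts):
    # delta_i(s_t): 1 if foot i touches the ground in state s_t, else 0; usable as a reward
    return float(read_contacts(state)[foot_index])

def episode_measure(states, foot_index, read_contacts):
    # original non-Markovian measure: (1/T) * sum_t delta_i(s_t); by construction the
    # per-step proxy above is positively correlated with it
    deltas = [foot_contact_indicator(s, foot_index, read_contacts) for s in states]
    return float(np.mean(deltas))
```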
Rather than spawning a new PPO instance with a separate actor and critic network for the RL objective $f$ and each MMP $\delta_i(s_t)$ independently, we start with a single actor $\pi_{\theta_\mu}(a|s)$ parameterized by the policy parameters $\theta_\mu$ of the current search policy, and $k + 1$ value functions $V_{\phi_f}, V_{\phi_{\delta_1}}, ..., V_{\phi_{\delta_k}}$. The actor is replicated $k + 1$ times, each copy paired with a corresponding value function. The actors are combined into a single vectorized policy $\pi_{<\theta_1^*, ..., \theta_{k+1}^*>}(a|s)$ that jointly optimizes $< f, \delta_1(s_t), ..., \delta_k(s_t) >$ for $N_1$ iterations, where $N_1$ is a configurable hyperparameter. We additionally modify the computation of the policy gradient into a batched policy gradient method, where intermediate gradient estimates of each function w.r.t. the policy parameters flow back only to the parameters of the corresponding individual policy during minibatch gradient descent. After $N_1$ iterations, we separate the vectorized policy, giving a set of subpolicies with optimized parameters $\pi_{\theta_f}(a|s), \ldots, \pi_{\theta_{\delta_k}}(a|s)$ that perform better w.r.t. their objectives. In the case of measure functions where $m_i(\cdot)$ is the proportional foot contact time of the $i^{th}$ leg, $m_i(\pi_{\theta_{\delta_i}}) > m_i(\pi_{\theta_\mu})$, i.e., the resulting policy will have a higher proportional foot contact time than the starting policy after optimization. Subtracting the initial parameters ($\theta_\mu$) from each resulting policy gives us the desired objective-measure Jacobian $\left[ \frac{\partial f}{\partial \theta_\mu}, \frac{\partial \delta_1}{\partial \theta_\mu}, \ldots, \frac{\partial \delta_k}{\partial \theta_\mu} \right]$, which can be linearly recombined in various ways to branch policies from $\theta_\mu$.

In addition to the VPPO implementation, we introduce the option to make the learnable action standard deviation parameter static. In the typical case, PPO decays this parameter over time in order to converge to a quasi-deterministic optimal policy at the expense of further exploration. In some environments, narrowing the action distribution can indeed help promote consistent optimal performance. In other environments, this effect can hinder the QD algorithm's ability to branch policies into new cells, given that the outer QD optimization loop relies on gradient estimates produced by PPO to discover unexplored regions of the archive. In environments where we observe this negative effect, we disable gradient flow to the action standard deviation parameter. Finally, in order to address environmental uncertainty, we insert new policies based on their performance and behavior averaged over 10 parallel environments. We leverage GPU acceleration to quickly batch process many parallel environments over a population of branched policies.

### 3.3 Connection to Natural Evolution Strategies

We replace CMA-ES with a Natural Evolution Strategy (NES) to increase the stability and performance of CMA-MAEGA on noisy RL environments. CMA-based variants of PPGA diverged during training. Prior work (Müller & Glasmachers, 2018) showed that CMA-ES struggled to evolve deep neural network controllers with parameters in $\mathbb{R}^d$ on stochastic RL environments.
However, CMA-MAEGA uses CMA-ES to maintain search distribution in objective-measure gradient coefficient space $\mathbb{R}^{k+1} < \mathbb{R}^d$, where $k + 1$ can be as small as three dimensions, implying that CMA-ES should still be effective in this low-dimensional space. It was then puzzling to find consistent divergence during the training of our CMA-based algorithm. We hypothesize that the culprit is the cumulative step-size adaptation (CSA) mechanism employed by CMA-ES. CMA-ES uses evolution paths $\rho_{\sigma}^{(g)}$ to adapt the step size $\sigma^{(g)}$ between successive generations $(g)$. The mechanisms by which $\sigma^{(g)}$ are updated assume a fairly non-noisy and stationary objective $f$. However, the application of CMA-ES to QD optimization on stochastic RL environments presumes the exact opposite. That is, the RL objective $f_{RL}$ is very noisy, and the QD-objective $f_{QD} = g(f_{RL}(\cdot))$, which is a function of the RL objective, is highly non-stationary, since the state of the archive $A$ changes the direction of greatest archive improvement on every iteration. To address the training divergence, we propose using exponential evolution strategies (xNES) (Glasmachers et al., 2010), a more recent and theoretically well-motivated method, as a drop in replacement for CMA-ES. Prior works have shown strong links between xNES and CMA-ES, and generalize both methods as instances of natural evolution strategies (Akimoto et al., 2010; Glasmachers et al., 2010). In fact, the update step in xNES is equivalent to CMA-ES up to the use of evolution paths. We refer to these prior works for an in-depth comparison. More generally, we believe any natural evolution strategy can be used to maintain and update the search distribution over gradient coefficients in this and any prior CMA-based DQD method. ### 3.4 Walking the Search Policy In standard DQD, $\nabla_{step}$ is computed via weighted linear recombination to produce a gradient vector that steps the search policy in the least explored direction of the archive. However, the resulting gradient vector is a linearized approximation around the current search policy $\theta_\mu$ and thus cannot be reused to take multiple gradient steps in a non-convex optimization problem. It would be remiss not to leverage the highly-parallelized VPPO implementation to "walk" the search policy over many steps in the direction of greatest archive improvement. We make the key observation that the mean gradient coefficient vector $c_\mu$ of the updated search distribution maintained by xNES points in the direction of greatest archive improvement for the next iteration of the QD algorithm. Thus, we construct a new multi-objective reward function for VPPO to optimize by taking the dot product between the gradient coefficient vector and the objective and measure proxies \(< c_{\mu_0}, ..., c_{\mu_{k+1}} > \cdot < f, \delta_1, ..., \delta_k >\). Optimizing this function with VPPO allows us to walk the search policy \(\theta_\mu\) in the direction of greatest archive improvement by iteratively taking conservative steps, where the magnitude of the movement is controllable by hyperparameter \(N_2\). This objective is stationary for all \(N_2\) steps, and is only updated after the subsequent QD iteration. We provide pseudocode in Appendix A. 4 EXPERIMENTS We evaluate our algorithm on four different continuous-control locomotion tasks derived from the original Mujoco environments (Todorov et al., 2012): Ant, Walker2d, Half-Cheetah, and Humanoid. 
The standard objective in each task is to maximize forward progress and robot stability while minimizing energy consumption. We use the Brax simulator to leverage GPU acceleration and massive parallelization of the environments. The observation space sizes for these environments are 87, 17, 18, and 227, respectively, and the action space sizes are 8, 6, 6, and 17, respectively. The standard Brax environments are augmented with wrappers that determine the measures of an agent in any given rollout as implemented in QDax (Lim et al., 2022), where the number of measures of an agent is equivalent to the number of legs. The measure function is the number of times a leg contacts the ground divided by the length of the trajectory. We implement PPGA in pyribs (Tjanaka et al., 2023), with our VPPO implementation based on CleanRL’s implementation of PPO (Huang et al., 2022). Most experiments were run on a SLURM cluster where each job had access to an NVIDIA RTX 2080Ti GPUs, 4 cores from a Intel(R) Xeon(R) Gold 6154 3.00GHz CPU, and 108GB of RAM. Some additional experiments and ablations were run on local workstations with access to an NVIDIA RTX 3090, AMD Ryzen 7900x 12 core CPU, and 64GB of RAM. 4.1 COMPARISONS We compare our results to current state-of-the-art QD-RL algorithms: Policy Gradient Assisted MAP-Elites (PGA-ME)\(^1\), Quality Diversity Policy Gradient (QDPG) implemented in QDax (Lim et al., 2022), and CMA-MAEGA(TD3, ES) implemented in pyribs (Tjanaka et al., 2023). We also compare against the state-of-the-art ES-based QD-RL algorithm, separable CMA-MAE (sep-CMA-MAE) (Tjanaka et al., 2022a), which allows evolutionary QD techniques to scale up to larger neural networks. Finally, in order to verify our hypothesis on the emergent synergy between PPO and DQD, we provide an ablation where TD3 is used as a drop-in replacement for PPO in PPGA, which we will refer to as TD3GA going forward. Details on the TD3GA design choices and additional ablations, such as comparing against standard PPO, can be found in the appendix. The same archive resolutions and network architectures are used for all baselines. A full list of shared hyperparameters is in Appendix B. We use an archive learning rate of 0.1, 0.15, 0.1, and 1.0 on Humanoid, Walker2d, Ant, and Half-Cheetah, respectively. Adaptive standard deviation is enabled for Ant and Humanoid. We reset the action distribution standard deviation to 1.0 on each iteration in all other environments. Figure 3: 2D Archive visualizations of PPGA compared to the current state-of-the-art QD-RL algorithm PGA-ME. We use 50x50 archives to show detail. \(^1\)A comparison on Humanoid to PBT-ME (SAC), a recent QD-RL method, can be found in Appendix H. PBT-ME (SAC) was trained with Google TPUs. Due to computational constraints, we were only able to provide a comparison on one task. We conduct our experiments using the following criteria: **QD-score**, which is the sum of scores of all nonempty cells in the archive, and **coverage**, which is the percentage of nonempty cells in the archive, have been historically used by QD algorithms to measure performance and diversity respectively, and so we include them as metrics. However, these metrics have a number of edge cases that make them imperfect measures of performance and diversity. For example, an algorithm that fills 100% of the archive with low-performing policies can have a higher QD-score and coverage than a QD algorithm that fills fewer cells with high-performing policies. 
To more accurately represent the performance and diversity of a given algorithm, we additionally include plots of the **Complementary Cumulative Distribution Function** (CCDF), originally presented in (Vassiliades et al., 2016), which shows what percentage of policies in the archive achieve a reward of $R$ or greater for all possible values of $R$ on the $x$-axis. The CCDF attempts to capture notions of quality of policies in the archive and diversity, while also shedding light on how the policies are distributed w.r.t. performance. Finally, we include the **best reward** metric, denoting the highest-performing policy the algorithm was able to discover. Figures 3 and 4 show that PPGA outperforms baselines in best reward and QD-score, achieving comparable coverage scores on all tasks except Ant, and generating much more illuminated archive heatmaps with a diverse range of higher performing policies than the current state of the art, PGA-ME. Notably, PPGA is the only algorithm that solves Humanoid, achieving over 4x improvement in best-performing policy and QD score compared to baselines. More important than QD-Score and Coverage are the CCDF plots. At $x = 0$, all policies in the archive are included, i.e., $x = 0$ encapsulates the coverage score. CCDF plots provide a better representation of "quality" than QD-score, since we can see how the policies in the archive are distributed. Except for Ant, PPGA produces distributions where more of the mass is distributed to the right where the high-performing policies lie. In Figure 5, we find evidence that PPO has an important synergy with DQD that is perhaps missing in other RL algorithms. TD3GA fails to find high performing policies on Humanoid. Achieving 100% coverage is indicative of the step size $\sigma$ in xNES exploding and producing highly stochastic policies that, by chance, land in far away cells. This typically occurs when xNES cannot fit a covariance matrix to the data, which in this case are weighted linear combinations of $\nabla f$, $\nabla m$ produced by TD3. ### 4.2 Post-Hoc Archive Analysis QD algorithms are known to struggle with reproducing performance and behavior in stochastic environments. To determine the replicability of our agents, we follow the guidelines in Flageat et al. Figure 5: PPGA vs TD3GA on Humanoid on the standard QD metrics. All plots are averaged over 4 seeds. The shaded regions are the 95% bootstrapped confidence intervals. 
![Graphs showing QD metrics comparison](image) | | QD-Score | Cov | Best | |----------------|----------|-----|------| | Humanoid | | | | | PPGA | $1.02 \times 10^5$ | 0.52 | 8324 | | PGA-ME | $1.08 \times 10^4$ | 0.39 | 446 | | sep-CMA-MAE | $1.27 \times 10^4$ | 0.42 | 498 | | QDPG | $6.53 \times 10^3$ | 0.26 | 412 | | CMA-MAEGA (TD3, ES) | $4.10 \times 10^3$ | 0.21 | 352 | | | QD-Score | Cov | Best | |----------------|----------|-----|------| | Walker2d | | | | | PPGA | $1.06 \times 10^5$ | 0.39 | 4702 | | PGA-ME | $6.53 \times 10^4$ | 0.38 | 1621 | | sep-CMA-MAE | $1.84 \times 10^5$ | 0.64 | 2326 | | QDPG | $6.59 \times 10^4$ | 0.35 | 1490 | | CMA-MAEGA (TD3, ES) | $3.48 \times 10^4$ | 0.33 | 1025 | | | QD-Score | Cov | Best | |----------------|----------|-----|------| | Halfcheetah | | | | | PPGA | $7.26 \times 10^5$ | 0.58 | 8919 | | PGA-ME | $3.31 \times 10^5$ | 0.31 | 4644 | | sep-CMA-MAE | $6.03 \times 10^5$ | 0.61 | 228 | | QDPG | $2.53 \times 10^5$ | 0.28 | 359 | | CMA-MAEGA (TD3, ES) | $5.70 \times 10^5$ | 0.56 | 2438 | | | QD-Score | Cov | Best | |----------------|----------|-----|------| | Ant | | | | | PPGA | $1.53 \times 10^7$ | 0.34 | 7328 | | PGA-ME | $1.79 \times 10^7$ | 0.37 | 4571 | | sep-CMA-MAE | $1.16 \times 10^7$ | 0.30 | 1629 | | QDPG | $3.03 \times 10^6$ | 0.25 | 78 | | CMA-MAEGA (TD3, ES) | $3.08 \times 10^6$ | 0.12 | 1020 | Figure 6: Corrected CCDFs and Corrected QD metrics: QD-Score, Coverage, Best Reward. Results are averaged over four seeds with error bars showing a 95% bootstrapped confidence interval. We re-evaluate each agent in the archive 50 times and average its performance and measures to construct a Corrected Archive and use this to produce Corrected QD metrics such as QD-Score and Coverage. Fig. 6 shows the corrected QD metrics and the corrected CCDFs, respectively. After re-evaluation, PPGA maintains the lead in best reward on all tasks, QD-score on Humanoid and Ant, and Coverage on Humanoid. The CCDF plots of the Corrected Archives show PPGA producing better distributions of policies on all tasks but Ant, suggesting PPGA’s policies are robust to stochasticity. 5 DISCUSSION AND LIMITATIONS We present a new method, PPGA, which is one of the first QD-RL methods to leverage on-policy RL, the first to solve the challenging Humanoid task, and the first to achieve equivalent performance in best reward compared to standard RL on all domains. We show that DQD algorithms and on-policy RL have emergent synergies that make them work particularly well with each other. However, instead of simply combining DQD and on-policy RL as is, we re-examine the fundamental assumptions and mechanisms of each component and implement changes that maximize their synergies. There are some caveats with this approach. On-policy RL algorithms such as PPO are quite sample-inefficient and require many parallel environments per agent in order to compute the stochastic policy gradient. Although GPU acceleration and massive parallelism improve wall-clock convergence over off-policy RL, this makes our approach less sample-efficient than other off-policy QD-RL methods. Secondly, enabling PPO’s adaptive standard deviation parameter (which is true by default for PPO) can have detrimental effects on PPGA’s exploration capabilities, as made evident by the coverage score on Ant. This is mainly due to the fact that PPO favors collapsing the standard deviation to achieve higher average returns. 
In the future, we will investigate modifying the standard deviation parameter such that it dynamically shrinks or increases the standard deviation value based on the QD-optimization landscape as opposed to the RL one. Finally, we are interested to see how this method scales to even more data-rich regimes such as distributed settings, as well as its application to harder problems such as real robotics tasks. We leave these as potential avenues of future research. 6 REPRODUCIBILITY In the supplemental material, we provide the source code and training scripts used to produce our results. In the README, we include documentation for setting up a Conda environment, running our training scripts, and visualizing our results. In addition, we provide pre-trained archives whose results were presented in this work. Detailed pseudocode and a list of relevant hyperparameters can be found in Appendices A and B. REFERENCES Youhei Akimoto, Yuichi Nagata, Isao Ono, and Shigenobu Kobayashi. Bidirectional relation between cma evolution strategies and natural evolution strategies. In Robert Schaefer, Carlos Cotta, Joanna Kołodziej, and Günter Rudolph (eds.), Parallel Problem Solving from Nature, PPSN XI, pp. 154–163, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. ISBN 978-3-642-15844-5. Sumeet Batra, Zhehui Huang, Aleksei Petrenko, Tushar Kumar, Artem Molchanov, and Gaurav S. Sukhatme. Decentralized control of quadrotor swarms with end-to-end deep reinforcement learning. In Aleksandra Faust, David Hsu, and Gerhard Neumann (eds.), Conference on Robot Learning, 8-11 November 2021, London, UK, volume 164 of Proceedings of Machine Learning Research, pp. 576–586. PMLR, 2021. URL https://proceedings.mlr.press/v164/batra22a.html Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, and Jean-Baptiste Mouret. Quality-Diversity Optimization: A Novel Branch of Stochastic Optimization, pp. 109–135. Springer International Publishing, Cham, 2021. ISBN 978-3-030-66515-9. doi: 10.1007/978-3-030-66515-9_4. URL https://doi.org/10.1007/978-3-030-66515-9_4 Tae Jong Choi and Julian Togelius. Self-referential quality diversity through differential map-elites. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’21, pp. 502–509, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383509. doi: 10.1145/3449639.3459383. URL https://doi.org/10.1145/3449639.3459383 Cédric Colas, Vashisht Madhavan, Joost Huizinga, and Jeff Clune. Scaling map-elites to deep neuroevolution. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, GECCO ’20, pp. 67–75, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450371285. doi: 10.1145/3377930.3390217. URL https://doi.org/10.1145/3377930.3390217 Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5027–5038. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7750-improving-exploration-in-evolution-strategies-for-deep-reinforcement-learning.pdf Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret. Robots that can adapt like animals. Nat., 521(7553):503–507, 2015. doi: 10.1038/nature14422. 
URL https://doi.org/10.1038/nature14422 Sam Earle, Justin Snider, Matthew C. Fontaine, Stefanos Nikolaidis, and Julian Togelius. Illuminating diverse neural cellular automata for level generation. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’22, pp. 68–76, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450392372. doi: 10.1145/3512290.3528754. URL https://doi.org/10.1145/3512290.3528754 Manon Flageat, Felix Chalumeau, and Antoine Cully. Empirical analysis of pga-map-elites for neuroevolution in uncertain domains. ACM Trans. Evol. Learn. Optim., Jan 2023. ISSN 2688-299X. doi: 10.1145/3577203. URL https://doi.org/10.1145/3577203 Just Accepted. Matthew Fontaine and Stefanos Nikolaidis. Covariance matrix adaptation map-annealing. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’23, pp. 456–465,
jPzysuGAwl
If this is the case, the rationale behind proposing PTDT-online warrants further clarification, as one might expect the incorporation of online data to enhance, rather than diminish, the model's efficacy.
Prompt-Tuning Decision Transformer with Preference Ranking Anonymous authors Paper under double-blind review Abstract Prompt-tuning has emerged as a promising method for adapting pre-trained models to downstream tasks or aligning with human preferences. Prompt learning is widely used in NLP but has limited applicability to RL due to the complex physical meaning and environment-specific information contained within RL prompts. Directly extending prompt-tuning approaches to RL is challenging because RL prompts guide agent behavior based on environmental modeling and analysis, rather than adjusting the prompt format for downstream tasks widely used in NLP. In this work, we propose the Prompt-Tuning DT algorithm to address these challenges by using trajectory segments as prompts to guide RL agents in acquiring environmental information and optimizing prompts via black-box tuning to enhance their ability to contain more relevant information, thereby enabling agents to make better decisions. Our approach involves randomly sampling a Gaussian distribution to fine-tune the elements of the prompt trajectory and using the preference ranking function to find the optimization direction, thereby providing more informative prompts and guiding the agent toward specific preferences in the target environment. Extensive experiments show that with only 0.03% of the parameters learned, Prompt-Tuning DT achieves comparable or even better performance than full-model fine-tuning in the few-shot settings. Our work contributes to the advancement of prompt-tuning approaches in RL, providing a promising direction for optimizing pre-trained large-scale RL agents for specific preference tasks. 1 Introduction Pre-trained large-scale models (PLMs) (Brown et al., 2020; Devlin et al., 2018; Lee et al., 2022; Reed et al., 2022) have proven to be highly effective for a wide range of tasks due to their high transferability and competitive performance on downstream tasks with limited target data. However, full-model fine-tuning requires updating and storing all the parameters of the PLM, which is memory-intensive and impractical for maintaining a separate set of parameters for each task during inference. Recently, prompt-tuning (Li & Liang, 2021; Shin et al., 2020) has emerged as a promising alternative to full-model fine-tuning, allowing for the effective adaptation of pre-trained models to specific downstream tasks and human preferences. By freezing the pre-trained model’s parameters and tuning only the prompts, prompt-tuning approaches have demonstrated comparable performance with full-model fine-tuning methods across various model scales and tasks (Liu et al., 2021; Lester et al., 2021; Zhong et al., 2021). Offline Reinforcement Learning (offline RL) is a data-driven approach that learns an optimal policy from trajectories collected by a set of behavior policies, without requiring access to the environments. This approach is critical in many settings, where online interactions are expensive or dangerous. However, offline RL struggles with generalization to unseen tasks and adapting to preferences, as the agent may not find a good policy in the test tasks due to the distribution shift. Recent works address this challenge through offline meta-RL, which leverages the algorithmic learning perspective (Mitchell et al., 2021; Nichol et al., 2018; Rajeswaran et al., 2019). In contrast, we aim to investigate the power of prompt-tuning methods with PLMs. 
Nonetheless, unlike natural language processing (NLP) prompts, RL prompts are more complex and contain environment-specific information, which may be vulnerable to the prompt learning process. Additionally, prompt-tuning approaches from NLP cannot be directly applied to RL prompts, as RL prompts guide agent behavior based on environmental modeling and analysis rather than adjusting the prompt format for downstream tasks. Figure 1: Application of Prompt-Tuning DT. At each iteration, the Pre-trained Large Model generates different trajectories for the current task based on different prompts, which are generated by perturbing the initial prompt with random Gaussian distribution, and the most recent K-step history. The generated trajectories are then ranked based on a specific property using a preference ranking oracle, and the ranking information is leveraged to update the prompt. Therefore, there is an urgent need to develop novel prompt-tuning techniques specifically tailored to RL that can guide agents toward specific preferences in the target environment. In this paper, we propose a novel algorithm called Prompt-Tuning Decision Transformer (Prompt-Tuning DT) as an approach to tackle the challenge of generalization in offline RL from the perspective of prompt tuning. Our approach leverages trajectory segments as prompts to guide RL agents in acquiring target environmental information and optimizes prompts via black-box tuning to enhance their ability to contain more meaningful information. Prompt-tuning is essential in RL as it enables pre-trained agents to make better decisions by providing more informative prompts. This contrasts with the limitations inherent in straightforward prompt-based adaptation methods (Xu et al., 2022): The process of generating high-quality trajectory prompts involves significant investments of time and resources, and the prompt’s effectiveness is constrained by the model’s input capacity for conditioning prompts (Lester et al., 2021). As a result, despite the progress made in prompt-based adaptation, downstream task quality still lags far behind that of full-model fine-tuned methods. In our prompt-tuning offline RL framework, we first pre-train the agent using offline trajectories from various tasks within the same environment. For each task, the agent learns to predict a target trajectory by conditioning on a trajectory prompt sampled from the same task. During the evaluation, the agent is presented with a new task and a small set of new trajectories for fine-tuning the prompt (total step length not exceeding 10). Our approach perturbs each element of the prompt by randomly sampling from a Gaussian distribution to avoid catastrophic deviations and employs a preference ranking function along with a ranking algorithm to determine the optimization direction. Notably, by optimizing a mere 0.03% of the model parameters, Prompt-Tuning DT achieves performance on par with full-model fine-tuning and surpasses alternative parameter-efficient methods. Our work contributes to the advancement of prompt-tuning approaches in RL, providing a promising direction for optimizing PLMs for specific preferences and downstream tasks. In summary, our main contributions are three-fold: • We propose Prompt-Tuning DT, a memory-efficient alternative to fine-tuning pre-trained agents that achieves comparable performance with full-model fine-tuning methods. 
• We present a prompt-tuning RL framework, which leverages a PLM's API to enable streamlined customization for specific preferences with minimal parameter modifications. • By optimizing with preference ranking, Prompt-Tuning DT outperforms strong meta offline RL baselines, demonstrating its effectiveness as a few-shot learner for generalization in offline RL.

2 RELATED WORK

**Offline RL.** Offline RL has emerged as a promising paradigm for learning from fixed, limited datasets consisting of trajectory rollouts from arbitrary policies (Levine et al., 2020). However, deploying off-policy RL algorithms directly in the offline setting can be challenging due to the distributional shift problem, which can result in a significant performance drop (Fujimoto et al., 2019). To overcome this issue, model-free RL algorithms adopt various strategies, such as constraining the action space of the policy (Kumar et al., 2019) or introducing pessimism to the value function (Kumar et al., 2020), to explicitly correct the distributional mismatch between the behavior policy in the offline data and the optimized policy. In contrast, model-based RL algorithms estimate the environmental reward and transition functions using offline data and require modifications to the RL algorithm to avoid exploiting errors in the estimated model (Kidambi et al., 2020; Yu et al., 2020b). Offline RL has been increasingly viewed as a sequence modeling task, and Transformer-based decision models have been applied to this domain. The objective is to predict the next sequence of actions based on the sequence of recent experiences, which includes state-action-reward triplets. This approach can be trained using supervised learning, which makes it more suitable for offline RL and imitation learning. Several studies have explored the use of Transformers in RL under the sequence modeling paradigm, including Gato (Reed et al., 2022), Generalized DT (Furuta et al., 2021), Graph DT (Hu et al., 2023), and the survey works (Hu et al., 2022; Li et al., 2023). In this work, we propose a novel approach that is based on Prompt-DT (Xu et al., 2022) and incorporates prompt-tuning techniques to enhance its performance on downstream target tasks.

**Meta RL.** Meta-learning algorithms enable efficient learning of new tasks by learning the process of learning itself. In the context of meta-RL, the objective is to generalize an agent's knowledge from one task to another. In recent years, several studies have delved into the problem of offline meta-RL. Li et al. (2020) address a scenario where task identity is spuriously inferred due to biased datasets and apply the triplet loss to robustify the task inference with reward relabelling. Dorfman et al. (2021) extend the online Meta-RL method VariBAD (Zintgraf et al., 2019) to the offline setup, where they assume known reward functions for each task and use reward relabelling to share data across tasks with shared dynamics. On the other hand, Mitchell et al. (2021) propose an offline Meta-RL algorithm based on MAML (Finn et al., 2017). Their approach includes an advantage weighting loss and introduces a policy update in the inner loop to theoretically increase the richness of the policy's update and empirically improve adaptation performance and stability. In this article, we investigate an alternative perspective on meta-RL using sequence modeling and prompt engineering, which can achieve comparable or superior performance relative to traditional methods.
**Prompt Learning.** Prompt learning is a promising methodology in NLP that involves optimizing a small subset of parameters while leaving the main model architecture unchanged. The basic premise of prompt learning involves presenting the model with a cloze test-style textual prompt, which the model is then expected to fill in with the corresponding answer. Autoprompt (Shin et al., 2020) proposes an automatic prompt search methodology for efficiently finding optimal prompts, while recent advancements in prompt learning have adopted trainable continuous embeddings for prompt representation (Li & Liang, 2021; Lester et al., 2021). Prompt learning has also been applied to the vision-language domain, where introducing continuous prompts into pre-trained vision-language models has led to significant improvements in few-shot visual recognition and generalization performance (Zhou et al., 2022a;b). While prompt learning reduces the number of tunable parameters, back-propagation through the entire model is still necessary to calculate gradients and update the small subset of parameters. Gradient-free methods have been proposed to optimize continuous (Sun et al., 2022) or discrete (Prasad et al., 2022) prompts. Despite the great success of prompt-tuning in the fields of NLP and CV, its application in RL has not been thoroughly explored. Therefore, in this study, we propose the Prompt-Tuning DT method, which employs gradient-free methods to optimize continuous trajectory prompts with a preference ranking oracle. This approach can be extended to a human-in-the-loop environment, where candidate prompts are ranked manually.

### 3 Preliminary

In this section, we provide a concise overview of the key components of our algorithm, namely the decision transformer and ranking optimization. The decision transformer extends the Transformer model to offline RL by framing RL problems as sequence modeling tasks, paving the way for the potential emergence of large-scale RL models. We also introduce a ranking optimization approach that utilizes ranking data to optimize the model without explicitly calculating gradient information. These algorithms form the basis of our approach illustrated in Section 4.

#### 3.1 Decision Transformer

Transformer (Vaswani et al., 2017), extensively studied in NLP (Devlin et al., 2018) and CV (Dosovitskiy et al., 2020), has also been explored in RL using the sequence modeling pattern (Hu et al., 2022; 2023). Moreover, recent works from NLP suggest Transformers pre-trained on large-scale datasets are capable of few-shot or zero-shot learning under the prompt-based framework (Liu et al., 2023; Brown et al., 2020). Building upon this, Gato (Reed et al., 2022) and TTP (Jain et al., 2023) both extend the prompt-based framework to the offline RL setting, constructing pre-trained large-scale agents designed to address multiple tasks concurrently in a zero-shot or few-shot fashion. Both methods are based on the Decision Transformer (Chen et al., 2021), which treats learning a policy as a sequence modeling problem. DT introduces the notion of modeling trajectories through state \( s_t \), action \( a_t \), and reward-to-go \( \hat{r}_t \) tuples collected at distinct time steps \( t \). The reward-to-go token quantifies the cumulative reward from the current time step to the end of the episode. During training with offline collected data, DT processes a trajectory sequence \( \tau_t \) in an auto-regressive manner which encompasses the most recent K-step historical context.
\[ \tau_t = (\hat{r}_{t-K+1}, s_{t-K+1}, a_{t-K+1}, \ldots, \hat{r}_t, s_t, a_t). \] The prediction head associated with a state token \( s_t \) is trained to predict the corresponding action \( a_t \). Regarding continuous action spaces, the training objective is to minimize the mean-squared loss: \[ L_{DT} = \mathbb{E}_{\tau_t \sim \mathcal{T}} \left[ \frac{1}{K} \sum_{i=t-K+1}^{t} (a_i - \pi(\tau_t)_i)^2 \right]. \] ### 3.2 Ranking Optimization Black-box optimization, which utilizes a derivative-free framework to optimize the target function, has been extensively studied in the optimization literature for several decades (Frazier, 2018; Nesterov & Spokoiny, 2017; Loshchilov & Hutter, 2016). With the rapid development of Reinforcement Learning with Human Feedback (RLHF), ranking data, which enables humans to express their personal preferences in a straightforward and intuitive manner (Bai et al., 2022; Ouyang et al., 2022), has demonstrated great potential for use in various applications, especially those where the exact value of personal information is sensitive, such as healthcare or finance. ZO-RankSGD (Cai et al., 2022; Bergou et al., 2020; Tang et al., 2023) is an effective approach for model optimization that finds the descent direction directly from the ranking information, without the need for knowledge of the gradient of the model or the exact value of the data. Given a ranking function \( f : \mathbb{R}^d \rightarrow \mathbb{R} \) and \( m \) points \( x^1, \ldots, x^m \) to query, an \((m,k)\) ranking oracle \( O_f^{(m,k)} \) will return \( k \) smallest points sorted in their order. With the ranking oracle \( O_f^{(m,k)} \) and a starting point \( x \), we can query \( O_f^{(m,k)} \) with the inputs \( x^i = x + \mu \xi_i, \quad \xi_i \sim \mathcal{N}(0, I_d), \) for \( i = 1, \ldots, m, \) and \( \mu \) is a constant. With the directed acyclic graph (DAG) \( G = (\mathcal{N}, \mathcal{E}) \) constructed from the ranking information of \( O_f^{(m,k)} \), where the node set \( \mathcal{N} = \{1, \ldots, m\} \) and the directed edge set \( \mathcal{E} = \{(i,j) | f(x_i) < f(x_j)\} \), the rank-based gradient estimator can be formulated as follows: \[ \tilde{g}(x) = \frac{1}{|\mathcal{E}|} \sum_{(i,j) \in \mathcal{E}} \frac{x^j - x^i}{\mu} = \frac{1}{|\mathcal{E}|} \sum_{(i,j) \in \mathcal{E}} (\xi_j - \xi_i). \] Then the point can be updated with \( x_{\text{new}} = x - \eta \tilde{g}(x) \), where \( \eta \) is the learning rate and \( \tilde{g}(x) \) is the estimated gradient. With the help of preference ranking oracle and ZO-RankSGD algorithm, we are able to optimize the prompt guiding the agent towards human preferences in the target environment. ### 4 Prompt-Tuning Decision Transformer This section introduces prompt-tuning as a memory-efficient alternative to full-model fine-tuning for the pre-trained agents in the context of few-shot policy generalization tasks. We begin by presenting the problem formulation in Section 4.1 and subsequently provide a formal definition of our method in Section 4.2. The overall procedure of our proposed Prompt-Tuning DT is illustrated in Figure 1. #### 4.1 Problem Formulation In our few-shot evaluation experiments, our objective is to align the output of the PLM with human preferences using a restricted number of offline trajectories and limited oracle calls, all accomplished in a parameter-efficient manner. 
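To make this update rule concrete, the following is a minimal sketch of one ranking-based update of the kind described in Section 3.2 and instantiated by Algorithm 1 below. The `oracle` callable, assumed to return the indices of the \(k\) best (smallest-\(f\)) query points in order, and all other names are illustrative rather than the released implementation.

```python
import itertools
import numpy as np

def zo_rank_step(x, oracle, m=10, k=5, mu=0.1, eta=0.01, rng=None):
    """One rank-based zeroth-order update (ZO-RankSGD-style) of the point x.

    x:      current point of shape (d,), e.g. a flattened trajectory prompt
    oracle: callable taking a list of m points and returning the indices of
            the k best points, ordered from best (smallest f) to worst
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal((m, x.shape[0]))        # m i.i.d. Gaussian directions
    queries = [x + mu * xi[i] for i in range(m)]
    ranked = list(oracle(queries))                   # indices of the k best queries

    # Directed edges (i, j) with f(x^i) < f(x^j): within the ranked points the
    # full order is known, and every ranked point beats every unranked point.
    unranked = [i for i in range(m) if i not in ranked]
    edges = list(itertools.combinations(ranked, 2))
    edges += [(i, j) for i in ranked for j in unranked]

    # Rank-based gradient estimate: average of (xi_j - xi_i) over the edge set.
    g = np.mean([xi[j] - xi[i] for i, j in edges], axis=0)
    return x - eta * g                               # descend along the estimate
```

With the offline preference function introduced later (Equation 6), the oracle would evaluate the frozen PLM's action-prediction loss under each perturbed prompt and return the indices of the \(k\) smallest losses.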
To better quantitatively evaluate our method, we adopt high cumulative reward, a widely-used indicator in the field of RL, as a representation of human preference and conduct experiments on few-shot generalization tasks, which involve training the agent on a set of tasks using offline data and evaluating its ability to generalize to new tasks. There are two distinct sets of tasks, denoted as \( T_{\text{train}} \) and \( T_{\text{test}} \), ensuring that there is no overlap between them (\( T_{\text{train}} \cap T_{\text{test}} = \emptyset \)). This arrangement requires the model to perform well on tasks with goals that lie outside the training range, thereby necessitating the ability to generalize to out-of-distribution tasks. Each task \( T_i \) in the training set \( T_{\text{train}} \) is associated with a corresponding dataset \( D_i \), which consists of pre-collected trajectories obtained using an unknown behavior policy \( \pi_i \). For a test task \( T_i \in T_{\text{test}} \), there are two possible approaches to adapt to the new domain. One approach involves updating the model parameters using task-specific offline data \( P_i \), which is usually much smaller than the training dataset (\( |P_i| \ll |D_i| \)). Alternatively, one can incorporate task-specific prompts derived from \( P_i \) to mitigate the issue of distribution shift, although such approaches are generally considered inferior to fine-tuning methods (Brown et al., 2020). Our method combines the advantages of both approaches to fine-tune the prompts.

**Algorithm 1 Prompt-Tuning DT**

Require: Initial prompt \( \tau_0^* \), stepsize \( \eta \), iterations \( T \), smoothing parameter \( \mu \).
1: Construct the initial point from the prompt: \( x_0 = \hat{r}_1^* \,||\, s_1^* \,||\, a_1^* \,||\, \ldots \,||\, \hat{r}_{K^*}^* \,||\, s_{K^*}^* \,||\, a_{K^*}^* \).
2: for \( t = 1 \) to \( T \) do
3: &nbsp;&nbsp;Sample \( m \) i.i.d. random vectors \( \{\xi_{(t,1)}, \ldots, \xi_{(t,m)}\} \) from \( \mathcal{N}(0, I_d) \).
4: &nbsp;&nbsp;Query the ranking function (the offline loss function or the online reward function) with inputs \( \{x_{t-1} + \mu\xi_{(t,1)}, \ldots, x_{t-1} + \mu\xi_{(t,m)}\} \).
5: &nbsp;&nbsp;Construct the corresponding DAG \( G = (\mathcal{N}, \mathcal{E}) \) as described in Section 3.2.
6: &nbsp;&nbsp;Compute the gradient estimator: \( g_t = \frac{1}{|\mathcal{E}|} \sum_{(i,j) \in \mathcal{E}} (\xi_{(t,j)} - \xi_{(t,i)}) \).
7: &nbsp;&nbsp;\( x_t = x_{t-1} - \eta g_t \).
8: end for

### 4.2 Deep Black-Box Tuning

Trajectory prompts contain only the information necessary to aid task identification and are insufficient for the agent to imitate directly; thus the length of the prompt \( K^* \) is usually less than 10. Each trajectory prompt contains multiple tuples of state \( s^* \), action \( a^* \) and reward-to-go \( \hat{r}^* \), denoted as \( (s^*, a^*, \hat{r}^*) \), following the representation in Chen et al. (2021) and Xu et al. (2022). Each element with superscript \( ^* \) is associated with the trajectory prompt, which can be formulated as:

\[ \tau^* = (\hat{r}_1^*, s_1^*, a_1^*, \ldots, \hat{r}_{K^*}^*, s_{K^*}^*, a_{K^*}^*) \]

In contrast to the prompt learning approach typically employed in NLP, where a cloze test-style textual prompt is presented to the model for filling in the corresponding answer, the trajectory prompt utilized in the Decision Transformer consists of tokens that have unique representations and physical interpretations. These tokens are carefully crafted to represent essential components of RL tasks, including the state, action, and return-to-go.
The state token encapsulates the environmental information of the agent at a given position and is usually represented by a high-dimensional vector. On the other hand, the action token exhibits significant variations across dimensions, with specific values corresponding to distinct actions. Moreover, the return-to-go token denotes the expected reward that we aim for the agent to attain. Given these distinct characteristics of RL prompts, directly applying prompt-tuning approaches from NLP becomes challenging: RL prompts are specifically tailored to guide agent behavior by leveraging environmental modeling and analysis, rather than primarily focusing on adjusting the prompt format as in NLP prompt learning.

We utilize the ZO-RankSGD optimization approach to update the trajectory prompt. This method avoids explicit gradient computation and eliminates the need for an intricate understanding of the particular structure of the PLM. Given the initial trajectory prompt \( \tau_0^* \), we concatenate one trajectory segment into a single vector and perturb it with Gaussian noise scaled by the smoothing parameter \( \mu \), which keeps the perturbed prompts close to the original and avoids catastrophic deviations:

\[ x_0 = \hat{r}_1^* \,||\, s_1^* \,||\, a_1^* \,||\, \ldots \,||\, \hat{r}_{K^*}^* \,||\, s_{K^*}^* \,||\, a_{K^*}^*, \]
\[ x_0^i = x_0 + \mu\xi_i, \quad \xi_i \sim \mathcal{N}(0, I_d), \]

where \( || \) means concatenation, \( \hat{r}_i^* \in \mathbb{R}^{d_r}, s_i^* \in \mathbb{R}^{d_s}, a_i^* \in \mathbb{R}^{d_a} \), and \( d_x = (d_r + d_s + d_a) \times K^* \).

For the ranking function $f$, we propose two preference ranking functions tailored to the offline and online RL settings: an offline loss function and an online reward function. For the offline setting, where we have access to a set of trajectories $\mathcal{P}$ collected in advance, we utilize the MSE loss between the true action and the predicted action as the preference ranking function:

$$f(x^i_0) = \mathbb{E}_{\tau^{\text{offline}} \sim \mathcal{P}} \left[ \frac{1}{T} \sum_{t=1}^{T} (a_t - \pi(x^i_0, \tau^{\text{offline}}_t))^2 \right]. \quad (6)$$

For the online setting, where we can interact with the simulator, we use the negated episode accumulated reward obtained by the model during online interactions as the preference ranking function:

$$f(x^i_0) = -\mathbb{E}_{\tau^{\text{online}}} \left[ \sum_{t=1}^{\infty} R(s_t, \pi(x^i_0, \tau^{\text{online}}_t)) \right]. \quad (7)$$

Since the ranking function is minimized, the minus sign ensures that higher cumulative reward corresponds to a smaller, and therefore better, value. In both cases, the selection of the preference ranking function aims to adapt to the human preference for high cumulative reward, which also serves as a widely used metric for evaluating a pre-trained model's performance. The ranking oracle $O_f^{(m,k)}$ then simply returns the order of these values, which is subsequently used to compute the gradient estimator. Human judgment can also be employed as an oracle to rank these trajectories based on individual preferences. However, this study does not delve into comprehensive experiments within human-in-the-loop settings, leaving this aspect for future investigations. In this context, we primarily showcase the algorithm's potential in a human-in-the-loop framework from a design perspective: (1) Ranking information possesses a unique appeal to humans as it offers a straightforward and intuitive means to express personal preferences without the need for exact scores or ratings, making our approach user-friendly.
(2) The forward-forward fine-tuning strategy proves advantageous in terms of conserving GPU memory, which bears significance for deployment on devices with limited resources. (3) Ranking-based approaches avoid the need for an intricate understanding of the PLM's structure; leveraging the PLM's API enables effective prompt fine-tuning in alignment with human preferences. Collectively, these attributes render our method well-suited for human-in-the-loop environments. The primary objective of this article is to establish the method's feasibility and efficiency.

We summarize the entire procedure of prompt-tuning in Algorithm 1. Prompt-Tuning DT employs an approximate gradient calculation to adapt the pre-trained agent to specific preferences. Gaussian noise is introduced to the initial prompt, driving Prompt-Tuning DT to discover a more expressive prompt tailored to the target tasks. There are two options available for the ranking function. The offline loss function requires pre-collected datasets from the target tasks in $\mathcal{T}^{\text{test}}$, while the online reward function assumes interaction with a simulator for the target tasks in $\mathcal{T}^{\text{test}}$. After $T$ iterations of the fine-tuning process, we utilize the optimized result to initialize the prompt $\tau^*$ at the onset of the evaluation stage and update the recent history $\tau$ with streaming data collected during evaluation.

5 EXPERIMENT

We perform experiments to assess how well Prompt-Tuning DT aligns with human preferences, using the episode accumulated reward as the evaluation metric. Our experimental evaluation seeks to answer the following research questions: (1) Can the prompt-tuning approach achieve comparable performance to full-model fine-tuning with limited ranking oracle calls? (2) What is the impact of the fine-tuning dataset size on the effectiveness of the prompt-tuning approach? (3) How do the quality and quantity of the prompt influence the effectiveness of the prompt-tuning approach? (4) How do the hyper-parameters influence the effectiveness of the prompt-tuning approach?

5.1 DATASETS AND TASKS

In this study, we assess the performance of our proposed approach on several datasets that are used in meta-RL (Finn et al., 2017; Rothfuss et al., 2018; Mitchell et al., 2021; Yu et al., 2020a), namely Cheetah-dir, Cheetah-vel, Ant-dir, and Meta-World reach-v2.

Table 1: Results for meta-RL control tasks. The best mean scores are highlighted in bold. Each environment uses prompts of length $K^* = 5$ and limited fine-tuning datasets. Scores are normalized so that 100 represents an expert policy. Notably, our methods outperform other parameter-efficient methods on almost all tasks and even achieve performance comparable with the Full-Model-FT method.
| Environment | PLM | PDT | Soft-Prompt | Adaptor | PTDT-offline | PTDT-online | Full-Model-FT |
|-----------------|-----------|-----------|-------------|------------|--------------|-------------|---------------|
| **Random Prompt Initialization** | | | | | | | |
| Cheetah-dir | -3.8 ± 0.3 | 94.7 ± 0.0 | 95.6 ± 0.3 | 74.8 ± 2.0 | 95.5 ± 0.0 | 95.1 ± 0.6 | 93.3 ± 1.0 |
| Cheetah-vel | 7.1 ± 0.9 | 44.2 ± 0.1 | 44.6 ± 0.1 | 19.7 ± 4.6 | 61.2 ± 3.2 | 60.5 ± 7.9 | 44.0 ± 9.8 |
| Ant-dir | 24.7 ± 1.4 | 61.7 ± 0.2 | 66.5 ± 0.4 | 75.3 ± 6.6 | 75.3 ± 5.9 | 78.7 ± 0.2 | 77.5 ± 1.7 |
| MW reach-v2 | 45.5 ± 1.0 | 42.1 ± 5.8 | 41.8 ± 5.8 | 0.3 ± 0.1 | 54.0 ± 2.2 | 49.9 ± 4.4 | 43.9 ± 13.0 |
| **Average** | 18.4 | 60.7 | 62.1 | 41.2 | 71.5 | 71.0 | 64.7 |
| **Expert Prompt Initialization** | | | | | | | |
| Cheetah-dir | -3.8 ± 0.3 | 94.6 ± 0.5 | 95.6 ± 0.1 | 75.5 ± 0.9 | 95.5 ± 0.1 | 95.4 ± 0.1 | 93.6 ± 0.7 |
| Cheetah-vel | 7.1 ± 0.9 | 86.0 ± 1.4 | 86.4 ± 1.2 | 27.1 ± 4.5 | 87.8 ± 0.4 | 87.5 ± 0.1 | 81.9 ± 0.5 |
| Ant-dir | 24.7 ± 1.4 | 71.3 ± 0.3 | 72.7 ± 0.3 | 76.8 ± 6.6 | 75.5 ± 0.4 | 74.5 ± 0.3 | 84.2 ± 1.6 |
| MW reach-v2 | 45.5 ± 1.0 | 50.9 ± 6.6 | 50.9 ± 6.6 | 0.3 ± 0.1 | 56.5 ± 1.0 | 53.1 ± 3.0 | 54.2 ± 8.6 |
| **Average** | 18.4 | 75.7 | 76.4 | 43.1 | 78.8 | 77.6 | 78.5 |

The objectives of these tasks are distinct, with Cheetah-dir and Ant-dir incentivizing high velocity in the goal direction, Cheetah-vel penalizing deviations from the target velocity using $l_2$ errors, and the Meta-World reach-v2 task requiring the robot's end-effector to reach a designated position in 3D space. The locomotion tasks require the agent to move forward as far and as quickly as possible. Detailed environment and hyper-parameter settings are given in Appendices A and B.

We adopt the dataset construction and settings from Mitchell et al. (2021) for the meta-RL control tasks considered in this study. Specifically, the datasets comprise the full replay buffer of Soft Actor-Critic (Haarnoja et al., 2018) for Cheetah-dir and Ant-dir, and of TD3 (Fujimoto et al., 2018) for Cheetah-vel. Expert trajectories are collected for Meta-World reach-v2 (Yu et al., 2020a) using scripted expert policies provided in the environment.

### 5.2 Baselines

We assess the few-shot generalization capabilities of Prompt-Tuning DT by comparing it against five baseline methods across meta-RL control tasks. Our approach begins with pre-training a PLM on the training tasks $\mathcal{T}^{train}$ using the DT architecture. Subsequently, this PLM is directly applied for inference on the test tasks $\mathcal{T}^{test}$ without any additional parameters, denoted as "PLM" in Table 1. We then explore four additional few-shot learning methods: (1) PDT (Xu et al., 2022) involves the collection of prompts from the target tasks $\mathcal{T}^{test}$ and incorporates these prompts into the input to aid the PLM in better adapting to the target tasks, which is known as straightforward prompt-based adaptation. (2) Soft-Prompt treats these collected prompts as soft embeddings and employs a separate optimizer to fine-tune these embeddings, akin to prevalent practices in the NLP domain (Lester et al., 2021). (3) Adaptor refers to a widely-used parameter-efficient technique previously explored in HDT (Xu et al., 2023). We integrate adaptors, following HDT's methodology, into each decoder module of the PLM. During inference, only the adaptors are fine-tuned to enhance the PLM's adaptation to the target tasks.
(4) We also consider the full-model fine-tuning method, which serves as an upper performance bound for fine-tuning techniques in scenarios with complete data (Li & Liang, 2021). This breadth allows for a thorough evaluation of the efficacy of our methods. Our approach encompasses two distinct variations: Prompt-Tuning DT with an offline loss function (PTDT-offline) and Prompt-Tuning DT with an online reward function (PTDT-online). To maintain fairness in the comparison, all offline methodologies are confined to utilizing an equivalent quantity of samples $\mathcal{P}_i$ sourced from the target task $\mathcal{T}^{test}$. While PTDT-online involves interaction with a simulator, we meticulously regulate the number of interactions to guarantee access to new trajectories of comparable magnitudes. Note that all methods utilize the same PLM to ensure an equitable comparison. By aligning these experimental setups, we establish a robust foundation for an unbiased assessment of the method’s performance, thereby enhancing the validity of our findings. ### 5.3 Main Results We perform a comparative analysis between Prompt-Tuning DT and the parameter-efficient baseline methods to assess their few-shot generalization abilities and evaluate the tuning efficiency of Prompt-Tuning DT in relation to the full-model fine-tuning approach. We use the episode accumulated reward as the evaluation metric in the test task set $T_{test}$. Note that we present two prompt initialization settings: the random prompt, gathered from a random policy, and the expert prompt, acquired from the expert policy. The main results are normalized and presented in Table 1, which showcases the few-shot performance of various algorithms (more details are presented in Appendix). The outcomes from the PLM underscore the significance of prompts, as PLM struggles to adapt to target tasks in zero-shot scenarios. During the random prompt initialization setting, PDT effectively leverages pre-collected prompts by incorporating them into PLM input, resulting in substantial improvements. Adaptor also exhibits enhanced performance over PLM by introducing supplementary fine-tuning adaptors within decoder modules. However, its efficacy is hampered, particularly in the MW reach-v2 and cheetah-vel environments, likely due to limited target datasets $\mathcal{P}_a$. Both Soft-Prompt and our proposed approach undertake further fine-tuning of the prompt itself. While Soft-Prompt treats the prompt as an embedding and optimizes it using the AdamW optimizer, it achieves better results than PDT but lags behind our approach. This discrepancy can be attributed to the intricate and environment-specific nature of RL prompts, which necessitate meticulous updates (visualized in Appendix C). Our approach demonstrates effectiveness in both offline and online settings, yielding notable performance improvements that surpass the benchmark of Full-Model-FT, which serves as a primary reference for evaluating the efficacy of our approach. Under the expert prompt initialization setting, characterized by strong prior knowledge, all baseline approaches exhibit substantial enhancements in comparison to their counterparts in the random initialization setting. Moreover, the relative improvement of our method, compared to other approaches, diminishes under the expert prompt regime. This reduction can be attributed to the strong prior condition introduced by expert trajectories, which constrains the extent of improvement across methods. 
Nevertheless, despite this limitation, our approach consistently outperforms all other parameter-efficient methods. Furthermore, our method achieves outcomes on par with Full-Model-FT. Collectively, these outcomes accentuate the effectiveness of our proposed prompt-tuning techniques across both random and expert prompt initialization scenarios. 5.4 ABLATION Random Search. Considering the random search could lead to a lot of variability in the performance of the algorithm, we further investigate the impact of the number $m$ and variance $\mu$ of Gaussian random variables. The results are shown in Figure 2(a). When we increase the number of samples $m$ during each update, the algorithm explores a larger set of possible directions to evaluate the performance, leading to a more accurate gradient estimation (variance decrease). However, as $m$ increases, the burden on the oracle, which needs to provide ranking information for the $m$ samples, also grows. On the other hand, increasing the variance of the Gaussian distribution $\mu$ allows the algorithm to explore a broader range of potential directions for performance evaluation. However, a higher variance of the Gaussian $\mu$ also introduces larger variability in the gradient estimation, which may not consistently guarantee performance improvement. Prompt Length. We investigate the impact of prompt length on the prompt-tuning methods, considering its influence on both the number of tuning parameters in the approach and the speed of inference. It is crucial to strike a balance between the richness of information provided by the prompt and the effectiveness of the prompt-tuning process. The results are shown in Figure 2(b). Our primary focus is on investigating Soft-Prompt and PTDT-offline. We employ PDT as the baseline, which does not involve additional fine-tuning processes. The augmentation of the prompt does not uniformly lead to enhanced generalization performance for both PDT and Soft-Prompt. This trend underscores the efficacy of our approach in adeptly refining prompts, even when confronted with a greater number of tuning parameters. Sample Efficiency. We explore the impact of progressively increasing the number of fine-tuning samples on the performance of fine-tuning approaches, aiming to understand the prompt-tuning methods’ dependence on the quantity of fine-tuning samples. Figure 3 illustrates the performance trends of these methods on the Cheetah-dir, Cheetah-vel, Ant-dir, and MetaWorld-reach-v2 environments. Prompt-tuning methods (PTDT, Soft-Prompt) exhibit consistent performance across varying sample sizes, whereas model-tuning methods (Adaptor, Full-Model-FT) exhibit incremental improvements as the number of samples increases. Figure 2: (a) Ablation analysis concerning the influence of parameters $m$ and $\mu$ for our PTDT-offline approach. (b) Ablation investigation into the effect of prompt length for the prompt-tuning methodologies. The experiments are conducted within the Ant-dir environment, and the reported results are averaged across three independent seeds, with normalization applied. Figure 3: Comparison between different fine-tuning approaches when different numbers of training samples are available. The figures from left to right correspond to Cheetah-dir, Cheetah-vel, Ant-dir, and MetaWorld-reach-v2 environments respectively. Each plot is run with 3 seeds. The x-axis is the training size and the y-axis is the evaluation metric (higher is better). 
Furthermore, it is worth noting that unlike the observed phenomenon in NLP (Li & Liang [2021], Gu et al. [2021]), the performance of these fine-tuning approaches does not exhibit a monotonically increasing trend as the amount of fine-tuning data increases. In all environments, a downward inflection point is observed in the performance curve as the number of samples increases. This can be attributed to the presence of "bad samples" in the training dataset, which adversely impact the fine-tuning process and potentially result in catastrophic deviations. By utilizing a larger dataset, the proportion of "bad samples" decreases, enabling the fine-tuning to converge to more optimal policies and improve overall performance. 6 CONCLUSION In this paper, we introduce Prompt-Tuning Decision Transformer (Prompt-Tuning DT), a novel algorithm that aligns with human preferences in the target environment. By optimizing prompts without back-propagation, Prompt-Tuning DT offers a memory-efficient alternative to fine-tuning PLMs. Furthermore, our prompt-tuning offline RL framework using trajectory prompts allows for effective adaptation to new tasks with minimal parameter optimization and a small number of trajectories. Through extensive experiments, our approach achieves performance on par with full-model fine-tuning and surpasses alternative parameter-efficient methods. Our work contributes to the advancement of prompt-tuning approaches in RL, providing a promising direction for optimizing pre-trained large-scale RL agents for specific preferences and downstream tasks. Our approach demonstrates the potential of prompt-tuning methods in RL settings and opens up avenues for future research in developing tailored prompt-tuning techniques for RL agents. We envision that prompt-tuning approaches will continue to play a crucial role in enhancing the generalization and adaptability of RL agents in real-world scenarios. REFERENCES Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *arXiv preprint arXiv:2204.05862*, 2022. El Houcine Bergou, Eduard Gorbunov, and Peter Richtárik. Stochastic three points method for unconstrained smooth minimization. *SIAM Journal on Optimization*, 30(4):2726–2749, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *NeurIPS*, 2020. HanQin Cai, Daniel McKenzie, Wotao Yin, and Zhenliang Zhang. A one-bit, comparison-based gradient estimator. *Applied and Computational Harmonic Analysis*, 60:242–266, 2022. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *NeurIPS*, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Ron Dorfman, Idan Shenfeld, and Aviv Tamar. Offline meta reinforcement learning–identifiability challenges and effective data collection strategies. *NeurIPS*, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 
An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *ICML*, 2017. Peter I Frazier. A tutorial on bayesian optimization. *arXiv preprint arXiv:1807.02811*, 2018. Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *ICML*, 2018. Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *ICML*, 2019. Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. Generalized decision transformer for offline hindsight information matching. *arXiv preprint arXiv:2111.10364*, 2021. Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. Ppt: Pre-trained prompt tuning for few-shot learning. *arXiv preprint arXiv:2109.04332*, 2021. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *ICML*, 2018. Shengchao Hu, Li Shen, Ya Zhang, Yixin Chen, and Dacheng Tao. On transforming reinforcement learning by transformer: The development trajectory. *arXiv preprint arXiv:2212.14164*, 2022. Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. Graph decision transformer. *arXiv preprint arXiv:2303.03747*, 2023. Vidhi Jain, Yixin Lin, Eric Undersander, Yonatan Bisk, and Akshara Rai. Transformers are adaptable task planners. In *CoRL*, 2023. Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based offline reinforcement learning. *NeurIPS*, 2020.
wPhbtwlCDa
Say, “$C(R)$ and $R$ differ by potential shaping and $S’$-distribution” in Definition 1, do you mean that we can get $C(R)$ by first potential shaping $R$ and then $S’$-distribution? Does the order matters?
STARC: A GENERAL FRAMEWORK FOR QUANTIFYING DIFFERENCES BETWEEN REWARD FUNCTIONS Joar Skalse Department of Computer Science Future of Humanity Institute Oxford University joar.skalse@cs.ox.ac.uk Lucy Farnik University of Bristol Bristol AI Safety Centre lucy.farnik@bristol.ac.uk Sumeet Ramesh Motwani Berkeley Artificial Intelligence Research University of California, Berkeley motwani@berkeley.edu Erik Jenner Berkeley Artificial Intelligence Research University of California, Berkeley jenner@berkeley.edu Adam Gleave FAR AI, Inc. adam@far.ai Alessandro Abate Department of Computer Science Oxford University aabate@cs.ox.ac.uk ABSTRACT In order to solve a task using reinforcement learning, it is necessary to first formalise the goal of that task as a reward function. However, for many real-world tasks, it is very difficult to manually specify a reward function that never incentivises undesirable behaviour. As a result, it is increasingly popular to use reward learning algorithms, which attempt to learn a reward function from data. However, the theoretical foundations of reward learning are not yet well-developed. In particular, it is typically not known when a given reward learning algorithm with high probability will learn a reward function that is safe to optimise. This means that reward learning algorithms generally must be evaluated empirically, which is expensive, and that their failure modes are difficult to anticipate in advance. One of the roadblocks to deriving better theoretical guarantees is the lack of good methods for quantifying the difference between reward functions. In this paper we provide a solution to this problem, in the form of a class of pseudometrics on the space of all reward functions that we call STARC (STAndardised Reward Comparison) metrics. We show that STARC metrics induce both an upper and a lower bound on worst-case regret, which implies that our metrics are tight, and that any metric with the same properties must be bilipschitz equivalent to ours. Moreover, we also identify a number of issues with reward metrics proposed by earlier works. Finally, we evaluate our metrics empirically, to demonstrate their practical efficacy. STARC metrics can be used to make both theoretical and empirical analysis of reward learning algorithms both easier and more principled. 1 INTRODUCTION To solve a sequential decision-making task with reinforcement learning or automated planning, we must first formalise that task using a reward function (Sutton & Barto, 2018; Russell & Norvig, 2020). However, for many tasks, it is extremely difficult to manually specify a reward function that captures the task in the intended way. To resolve this issue, it is increasingly popular to use reward learning, which attempts to learn a reward function from data. There are many techniques for doing this. For example, it is possible to use preferences between trajectories (e.g., Christiano et al., 2017), expert demonstrations (e.g., Ng & Russell, 2000), or a combination of the two (e.g., Ibarz et al., 2018). To evaluate a reward learning method, we must quantify the difference between the learnt reward function and the underlying true reward function. However, doing this is far from straightforward. A simple method might be to measure their $L_2$-distance. 
However, this is unsatisfactory, because two reward functions can have a large $L_2$-distance, even if they induce the same ordering of policies, or a small $L_2$-distance, even if they induce the opposite ordering of policies.\footnote{For example, given an arbitrary reward function $R$ and an arbitrary constant $c$, we have that $R$ and $c \cdot R$ have the same ordering of policies, even though their $L_2$-distance may be arbitrarily large. Similarly, for any $\epsilon$, we have that $\epsilon \cdot R$ and $-\epsilon \cdot R$ have the opposite ordering of policies, unless $R$ is constant, even though their $L_2$-distance may be arbitrarily small.} Another option is to evaluate the learnt reward function on a test set. However, this is also unsatisfactory, because it can only guarantee that the learnt reward function is accurate on a given data distribution, and when the reward function is optimised we necessarily incur a distributional shift (after which the learnt reward function may no longer match the true reward function). Yet another option is to optimise the learnt reward function, and evaluate the obtained policy according to the true reward function. However, this is also unsatisfactory, both because it is very expensive, and because it makes it difficult to separate issues with the policy optimisation process from issues with the reward learning algorithm. Moreover, because this method is purely empirical, it cannot be used for theoretical work. These issues make it challenging to evaluate reward learning algorithms in a way that is principled and robust. This in turn makes it difficult to anticipate in what situations a reward learning algorithm might fail, or what their failure modes might look like. It also makes it difficult to compare different reward learning algorithms against each other, without getting results that may be heavily dependent on the experimental setup. These issues limit the applicability of reward learning in practice. In this paper, we introduce STAndardised Reward Comparison (STARC) metrics, which is a family of pseudometrics that quantify the difference between reward functions in a principled way. Moreover, we demonstrate that STARC metrics enjoy strong theoretical guarantees. In particular, we show that STARC metrics induce an upper bound on the worst-case regret that can be induced under arbitrary policy optimisation, which means that a small STARC distance guarantees that two reward functions behave in a similar way. Moreover, we also demonstrate that STARC metrics induce a lower bound on worst-case regret. This has the important consequence that any reward function distance metric which induces both an upper and a lower bound on worst-case regret must be bilipschitz equivalent to STARC metrics, which in turn means that they (in a certain sense) are unique. In particular, we should not expect to be able to improve on them in any substantial way. In addition to this, we also evaluate STARC metrics experimentally, and demonstrate that their theoretical guarantees translate into compelling empirical performance. STARC metrics are cheap to compute, which means that they can be used for empirical evaluation of reward learning algorithms. Moreover, they can be calculated from a closed-form expression, which means that they are also suitable for use in theoretical analysis. As such, STARC metrics enable us to evaluate reward learning methods in a way that is both easier and more theoretically principled than relevant alternatives. 
Our work thus contributes towards building a more rigorous foundation for the field of reward learning. 1.1 Related Work There are two existing papers that study the problem of how to quantify the difference between reward functions. The first is Gleave et al. (2020), which proposes a distance metric that they call Equivalent-Policy Invariant Comparison (EPIC). They show that the EPIC-distance between two reward functions induces a regret bound for optimal policies. The second paper is Wulfe et al. (2022), which proposes a distance metric that they call Dynamics-Aware Reward Distance (DARD). Unlike EPIC, DARD incorporates information about the transition dynamics of the environment. This means that DARD might give a tighter measurement, in situations where the transition dynamics are known. Unlike Gleave et al. (2020), they do not derive any regret bound for DARD. Our work extends the work by Gleave et al. (2020) and Wulfe et al. (2022) in several important ways. First of all, Wulfe et al. (2022) do not provide any regret bounds, which is unsatisfactory for theoretical work, and the upper regret bound that is provided by Gleave et al. (2020) is both weaker and less general than ours. In particular, their bound only considers optimal policies, whereas our bound covers all pairs of policies (with optimal policies being a special case). Moreover, we also argue that Gleave et al. (2020) have chosen to quantify regret in a way that fails to capture what we care about in practice. In Appendix A, we provide an extensive theoretical analysis of EPIC, and show that it lacks many of the important theoretical guarantees enjoyed by STARC metrics. In particular, we demonstrate that EPIC fails to induce either an upper or lower bound on worst-case regret (as we define it). We also include an extensive discussion and criticism of DARD in Appendix B. Moreover, in Section 4, we provide experimental data that shows that STARC metrics in practice can have a much tighter correlation with worst-case regret than both EPIC and DARD. This means that STARC metrics both can attain better empirical performance and give stronger theoretical guarantees than the pseudometrics proposed by earlier work. It is important to note that EPIC is designed to be independent of the environment dynamics, whereas both STARC and DARD depend on the transition dynamics. This issue is discussed in Section 2.3. The question of what happens if one reward function is optimised instead of a different reward function is considered by many previous works. A notable example is Ng et al. (1999), which shows that if two reward functions differ by a type of transformation they call potential shaping, then they have the same optimal policies in all environments. Potential shaping is also studied by e.g. Jenner et al. (2022). Another example is Skalse et al. (2022b), which shows that if two reward functions \( R_1, R_2 \) have the property that there are no policies \( \pi_1, \pi_2 \) such that \( J_1(\pi_1) > J_1(\pi_2) \) and \( J_2(\pi_1) < J_2(\pi_2) \), then either \( R_1 \) and \( R_2 \) induce the same ordering of policies, or at least one of them assigns the same reward to all policies. Zhuang & Hadfield-Menell (2021) consider proxy rewards that depend on a strict subset of the features which are relevant to the true reward, and then show that optimising such a proxy in some cases may be arbitrarily bad, given certain assumptions. Skalse et al. 
(2022a) derive necessary and sufficient conditions for when two reward functions are equivalent, for the purposes of computing certain policies or other mathematical objects. Also relevant is Everitt et al. (2017), which studies the related problem of reward corruption, and Pan et al. (2022), which considers natural choices of proxy rewards for several environments. Unlike these works, we are interested in the question of quantifying the difference between reward functions.

### 1.2 Preliminaries

A Markov Decision Process (MDP) is a tuple \((S, A, \tau, \mu_0, R, \gamma)\) where \( S \) is a set of states, \( A \) is a set of actions, \( \tau : S \times A \rightarrow \Delta(S) \) is a transition function, \( \mu_0 \in \Delta(S) \) is an initial state distribution, \( R : S \times A \times S \rightarrow \mathbb{R} \) is a reward function, and \( \gamma \in (0, 1) \) is a discount rate. A policy is a function \( \pi : S \rightarrow \Delta(A) \). A trajectory \( \xi = \langle s_0, a_0, s_1, a_1, \ldots \rangle \) is a possible path in an MDP. The return function \( G \) gives the cumulative discounted reward of a trajectory, \( G(\xi) = \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \), and the evaluation function \( J \) gives the expected trajectory return given a policy, \( J(\pi) = \mathbb{E}_{\xi \sim \pi}[G(\xi)] \). A policy maximising \( J \) is an optimal policy. The value function \( V^\pi : S \rightarrow \mathbb{R} \) of a policy encodes the expected future discounted reward from each state when following that policy. We use \( \mathcal{R} \) to refer to the set of all reward functions. When talking about multiple rewards, we give each reward a subscript \( R_i \), and use \( J_i, G_i, \) and \( V_i^\pi \) to denote \( R_i \)'s evaluation function, return function, and \( \pi \)-value function. In this paper, we assume that all states are reachable under \( \tau \) and \( \mu_0 \). Note that if this is not the case, then all unreachable states can simply be removed from \( S \). Our theoretical results also assume that \( S \) and \( A \) are finite. However, STARC metrics can still be computed in continuous environments.

Given a set \( X \), a function \( d : X \times X \rightarrow \mathbb{R} \) is called a pseudometric if \( d(x_1, x_1) = 0 \), \( d(x_1, x_2) \geq 0 \), \( d(x_1, x_2) = d(x_2, x_1) \), and \( d(x_1, x_3) \leq d(x_1, x_2) + d(x_2, x_3) \), for all \( x_1, x_2, x_3 \in X \). Given two pseudometrics \( d_1, d_2 \) on \( X \), if there are constants \( \ell, u \) such that \( \ell \cdot d_1(x_1, x_2) \leq d_2(x_1, x_2) \leq u \cdot d_1(x_1, x_2) \) for all \( x_1, x_2 \in X \), then \( d_1 \) and \( d_2 \) are bilipschitz equivalent. Given a vector space \( V \), a function \( n : V \rightarrow \mathbb{R} \) is a norm if \( n(v_1) \geq 0 \), \( n(v_1) = 0 \iff v_1 = 0 \), \( n(c \cdot v_1) = |c| \cdot n(v_1) \), and \( n(v_1 - v_2) \leq n(v_1) + n(v_2) \) for all \( v_1, v_2 \in V, c \in \mathbb{R} \). Given a norm \( n \), we can define a (pseudo)metric \( m \) as \( m(x, y) = n(x - y) \). In a mild abuse of notation, we will often denote this metric using \( n \) directly, so that \( n(x, y) = n(x - y) \). For any \( p \in \mathbb{N} \), \( L_p \) is the norm given by \( L_p(v) = (\sum |v_i|^p)^{1/p} \). A norm \( n \) is a weighted version of \( n' \) if \( n = n' \circ M \) for a diagonal matrix \( M \). We will use potential shaping, which was first introduced by Ng et al. (1999). First, a potential function is a function \( \Phi : S \rightarrow \mathbb{R} \).
Given a discount \( \gamma \), we say that \( R_1 \) and \( R_2 \) differ by potential shaping if for some potential \( \Phi \), we have that \( R_2(s, a, s') = R_1(s, a, s') + \gamma \cdot \Phi(s') - \Phi(s) \). We also use \( S' \)-redistribution (as defined by Skalse et al. (2022a)). Given a transition function \( \tau \), we say that \( R_1 \) and \( R_2 \) differ by \( S' \)-redistribution if \( \mathbb{E}_{S' \sim \tau(s,a)}[R_2(s, a, S')] = \mathbb{E}_{S' \sim \tau(s,a)}[R_1(s, a, S')] \). Finally, we say that \( R_1 \) and \( R_2 \) differ by positive linear scaling if \( R_2(s, a, s') = c \cdot R_1(s, a, s') \) for some positive constant \( c \). We will also combine these transformations. For example, we say that \( R_1 \) and \( R_2 \) differ by potential shaping and \( S' \)-redistribution if it is possible to produce \( R_2 \) from \( R_1 \) by applying potential shaping and \( S' \)-redistribution (in any order). The cases where \( R_1 \) and \( R_2 \) differ by (for example) potential shaping and positive linear scaling, etc. are defined analogously. Finally, we will use the following result, proven by Skalse & Abate (2023) in their Theorem 2.6: **Proposition 1.** \((S, A, \tau, \mu_0, R_1, \gamma)\) and \((S, A, \tau, \mu_0, R_2, \gamma)\) have the same ordering of policies if and only if \( R_1 \) and \( R_2 \) differ by potential shaping, positive linear scaling, and \( S' \)-redistribution. The “ordering of policies” is the ordering induced by the policy evaluation function \( J \). EPIC (Gleave et al., 2020) is defined relative to a distribution \( D_S \) over \( S \) and a distribution \( D_A \) over \( A \), which must give support to all states and actions. It is computed in several steps. First, let \( C^{\text{EPIC}} : \mathcal{R} \to \mathcal{R} \) be the function where \( C^{\text{EPIC}}(R)(s, a, s') \) is equal to \[ R(s, a, s') + \mathbb{E}[\gamma R(s', A, S') - R(s, A, S') - \gamma R(S, A, S')], \] where \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( S \) and \( S' \) are sampled independently. Next, let the “Pearson distance” between two random variables \( X \) and \( Y \) be defined as \( \sqrt{(1 - \rho(X, Y))/2} \), where \( \rho \) denotes the Pearson correlation. Then the EPIC-distance \( D^{\text{EPIC}}(R_1, R_2) \) is defined to be the Pearson distance between \( C^{\text{EPIC}}(R_1)(S, A, S') \) and \( C^{\text{EPIC}}(R_2)(S, A, S') \), where again \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( D^{\text{EPIC}} \) is implicitly parameterised by \( D_S \) and \( D_A \). To better understand how EPIC works, it is useful to know that it can be equivalently expressed as \[ D^{\text{EPIC}}(R_1, R_2) = \frac{1}{2} \cdot L_{2,D} \left( \frac{C^{\text{EPIC}}(R_1)}{L_{2,D}(C^{\text{EPIC}}(R_1))}, \frac{C^{\text{EPIC}}(R_2)}{L_{2,D}(C^{\text{EPIC}}(R_2))} \right), \] where \( L_{2,D} \) is a weighted \( L_2 \)-norm. For details, see Appendix E. Here \( C^{\text{EPIC}} \) maps all reward functions that differ by potential shaping to a single representative in their equivalence class. This, combined with the normalisation step, ensures that reward functions which only differ by potential shaping and positive linear scaling have distance 0 under \( D^{\text{EPIC}} \). DARD (Wulfe et al., 2022) is also defined relative to a distribution \( D_S \) over \( S \) and a distribution \( D_A \) over \( A \), which must give support to all actions and all reachable states, but it also requires a transition function \( \tau \). 
Let \( C^{\text{DARD}} : \mathcal{R} \to \mathcal{R} \) be the function where \( C^{\text{DARD}}(R)(s, a, s') \) is
\[
R(s, a, s') + \mathbb{E}[\gamma R(s', A, S'') - R(s, A, S') - \gamma R(S', A, S'')],
\]
where \( A \sim D_A \), \( S' \sim \tau(s, A) \), and \( S'' \sim \tau(s', A) \). Then the DARD-distance \( D^{\text{DARD}}(R_1, R_2) \) is defined to be the Pearson distance between \( C^{\text{DARD}}(R_1)(S, A, S') \) and \( C^{\text{DARD}}(R_2)(S, A, S') \), where again \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( D^{\text{DARD}} \) is parameterised by \( D_S, D_A, \) and \( \tau \).

## 2 STARC METRICS

In this section we formally define STARC metrics, and provide several examples of such metrics.

### 2.1 A Formal Definition of STARC Metrics

STARC metrics are defined relative to an environment, consisting of a set of states \( S \), a set of actions \( A \), a transition function \( \tau \), an initial state distribution \( \mu_0 \), and a discount factor \( \gamma \). This means that many of our definitions and theorems are implicitly parameterised by these objects, even when this dependency is not spelled out explicitly. Our results hold for any choice of \( S, A, \tau, \mu_0, \) and \( \gamma \), as long as they satisfy the assumptions given in Section 1.2. See also Section 2.3.

STARC metrics are computed in several steps, where the first steps collapse certain equivalence classes in \( \mathcal{R} \) to a single representative, and the last step measures a distance. The reason for this is that two distinct reward functions can share the exact same preferences between all policies. When this is the case, we want them to be treated as equivalent. This is achieved by standardising the reward functions in various ways before the distance is finally measured. First, recall that neither potential shaping nor \( S' \)-redistribution affects the policy ordering in any way. This motivates the first step:

**Definition 1.** A function \( c : \mathcal{R} \to \mathcal{R} \) is a canonicalisation function if \( c \) is linear, \( c(R) \) and \( R \) only differ by potential shaping and \( S' \)-redistribution for all \( R \in \mathcal{R} \), and for all \( R_1, R_2 \in \mathcal{R} \), \( c(R_1) = c(R_2) \) if and only if \( R_1 \) and \( R_2 \) only differ by potential shaping and \( S' \)-redistribution.

Note that we require \( c \) to be linear. Note also that \( C^{\text{EPIC}} \) and \( C^{\text{DARD}} \) are not canonicalisation functions in our sense, because we here require canonicalisation functions to simultaneously standardise both potential shaping and \( S' \)-redistribution, whereas \( C^{\text{EPIC}} \) and \( C^{\text{DARD}} \) only standardise potential shaping. In Section 2.2, we provide examples of canonicalisation functions. Let us next introduce the functions that we use to compute a distance:

**Definition 2.** A metric \( m : \mathcal{R} \times \mathcal{R} \to \mathbb{R} \) is admissible if there exists a norm \( p \) and two (positive) constants \( u, \ell \) such that \( \ell \cdot p(x,y) \leq m(x,y) \leq u \cdot p(x,y) \) for all \( x, y \in \mathcal{R} \).

A metric is admissible if it is bilipschitz equivalent to a norm.

---

Gleave et al. (2020) allow different distributions to be used when computing \( C^{\text{EPIC}}(R) \) and when taking the Pearson distance. However, doing this breaks some of their theoretical results. For details, see Appendix E.
Any norm is an admissible metric, though there are admissible metrics which are not norms.\(^3\) Recall also that all norms are bilipschitz equivalent on any finite-dimensional vector space. This means that if \( m \) satisfies Definition 2 for one norm, then it satisfies it for all norms. We can now define our class of reward metrics:

**Definition 3.** A function \( d : \mathcal{R} \times \mathcal{R} \to \mathbb{R} \) is a STARC metric (STAndardised Reward Comparison) if there is a canonicalisation function \( c \), a function \( n \) that is a norm on \( \text{Im}(c) \), and a metric \( m \) that is admissible on \( \text{Im}(s) \), such that \( d(R_1,R_2) = m(s(R_1), s(R_2)) \), where \( s(R) = c(R)/n(c(R)) \) when \( n(c(R)) \neq 0 \), and \( c(R) \) otherwise.

Intuitively speaking, \( c \) ensures that all reward functions which differ by potential shaping and \( S' \)-redistribution are considered to be equivalent, and division by \( n \) ensures that positive scaling is ignored as well. Note that if \( n(c(R)) = 0 \), then \( c(R) \) assigns 0 reward to every transition. Note also that \( \text{Im}(c) \) is the image of \( c \), if \( c \) is applied to the entirety of \( \mathcal{R} \). If \( n \) is a norm on \( \mathcal{R} \), then \( n \) is also a norm on \( \text{Im}(c) \), but there are functions which are norms on \( \text{Im}(c) \) but not on \( \mathcal{R} \) (cf. Proposition 4). In Appendix C, we provide a geometric intuition for how STARC metrics work.

### 2.2 Examples of STARC Metrics

In this section, we give several examples of STARC metrics. We begin by showing how to construct canonicalisation functions. We first give a simple and straightforward method:

**Proposition 2.** For any policy \( \pi \), the function \( c : \mathcal{R} \to \mathcal{R} \) given by
\[
c(R)(s,a,s') = \mathbb{E}_{S' \sim \tau(s,a)} \left[ R(s,a,S') - V^\pi(s) + \gamma V^\pi(S') \right]
\]
is a canonicalisation function. Here \( V^\pi \) is computed under the reward function \( R \) given as input to \( c \). We call this function Value-Adjusted Levelling (VAL).

The proof, as well as all other proofs, are given in the Appendix. Proposition 2 gives us an easy way to make canonicalisation functions, which are also easy to evaluate whenever \( V^\pi \) is easy to approximate. We next give another example of canonicalisation functions:

**Definition 4.** A canonicalisation function \( c \) is minimal for a norm \( n \) if for all \( R \) we have that \( n(c(R)) \leq n(R') \) for all \( R' \) such that \( R \) and \( R' \) only differ by potential shaping and \( S' \)-redistribution.

Minimal canonicalisation functions give rise to tighter regret bounds (cf. Section 3 and Appendix F). It is not a given that minimal canonicalisation functions exist for a given norm \( n \), or that they are unique. However, for any weighted \( L_2 \)-norm, this is the case:

**Proposition 3.** For any weighted \( L_2 \)-norm, a minimal canonicalisation function exists and is unique.

A STARC metric can use any canonicalisation function \( c \). Moreover, the normalisation step can use any function \( n \) that is a norm on \( \text{Im}(c) \). This does of course include the \( L_1 \)-norm, \( L_2 \)-norm, \( L_\infty \)-norm, and so on. We next show that \( \max_\pi J(\pi) - \min_\pi J(\pi) \) also is a norm on \( \text{Im}(c) \):

**Proposition 4.** If \( c \) is a canonicalisation function, then the function \( n : \mathcal{R} \to \mathbb{R} \) given by \( n(R) = \max_\pi J(\pi) - \min_\pi J(\pi) \) is a norm on \( \text{Im}(c) \).
\(^3\)For example, the unit ball of \( m \) does not have to be convex, or symmetric around the origin.

For the final step, we of course have that any norm is an admissible metric, though some other metrics are admissible as well. (For example, if \( m(x, y) \) is the angle between \( x \) and \( y \) when \( x, y \neq 0 \), and we define \( m(0, 0) = 0 \) and \( m(x, 0) = \pi/2 \) for \( x \neq 0 \), then \( m \) is admissible, even though it is not a norm.) To obtain a STARC metric, we then pick any canonicalisation function \( c \), norm \( n \), and admissible metric \( m \), and combine them as described in Definition 3. Which choice of \( c, n, \) and \( m \) is best in a given situation may depend on multiple considerations, such as how easy they are to compute, how easy they are to work with theoretically, or how well they together track worst-case regret (cf. Sections 3 and 4).

### 2.3 Unknown Transition Dynamics and Continuous Environments

STARC metrics depend on the transition function \( \tau \), through the definition of canonicalisation functions (since \( S' \)-redistribution depends on \( \tau \)). Moreover, \( \tau \) is often unknown in practice. However, it is important to note that while STARC metrics depend on \( \tau \), there are STARC metrics that can be computed without direct access to \( \tau \). For example, the VAL canonicalisation function (Proposition 2) only requires that we can sample from \( \tau \), which is always possible in the reinforcement learning setting. Moreover, if we want to evaluate a learnt reward function in an environment that is different from the training environment, then we can simply use the \( \tau \) from the evaluation environment. As such, we do not consider the dependence on \( \tau \) to be a meaningful limitation. Nonetheless, it is possible to define STARC-like pseudometrics that do not depend on \( \tau \) at all, and such pseudometrics also have some theoretical guarantees (albeit guarantees that are weaker than those enjoyed by STARC metrics). This option is discussed in Appendix F.3.

Moreover, we assume that \( S \) and \( A \) are finite, but many interesting environments are continuous. However, it is important to note that while our theoretical results assume that \( S \) and \( A \) are finite, it is still straightforward to compute and use STARC metrics in continuous environments (for example, using the VAL canonicalisation function from Proposition 2). We discuss this issue in more detail in Appendix D. In Section 4, we also provide experimental data from a continuous environment.

### 3 Theoretical Results

In this section, we prove that STARC metrics enjoy several desirable theoretical guarantees. First, we note that all STARC metrics are pseudometrics on the space of all reward functions, \( \mathcal{R} \):

**Proposition 5.** All STARC metrics are pseudometrics on \( \mathcal{R} \).

This means that STARC metrics give us a well-defined notion of a “distance” between rewards. Next, we characterise the cases when STARC metrics assign two rewards a distance of zero:

**Proposition 6.** All STARC metrics have the property that \( d(R_1, R_2) = 0 \) if and only if \( R_1 \) and \( R_2 \) induce the same ordering of policies.

This means that STARC metrics consider two reward functions to be equivalent exactly when those reward functions induce the same ordering of policies. This is intuitive and desirable.
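To make these objects concrete, the following is a minimal sketch of how one particular STARC metric (the VAL canonicalisation of Proposition 2 together with the \( L_2 \)-norm for both \( n \) and \( m \)) could be evaluated on a small tabular MDP. It assumes NumPy, represents \( R \) and \( \tau \) as arrays of shape \( (|S|, |A|, |S|) \), and all helper names are our own rather than part of any existing library.

```python
import numpy as np

def policy_value(R, tau, pi, gamma):
    """V^pi for a tabular MDP. R and tau have shape (S, A, S'); pi has shape (S, A)."""
    n_states = R.shape[0]
    # Expected one-step reward under pi: r[s] = sum_{a,s'} pi(a|s) * tau(s'|s,a) * R(s,a,s').
    r = np.einsum('sa,sap,sap->s', pi, tau, R)
    # State-to-state transition matrix under pi: P[s,s'] = sum_a pi(a|s) * tau(s'|s,a).
    P = np.einsum('sa,sap->sp', pi, tau)
    return np.linalg.solve(np.eye(n_states) - gamma * P, r)

def val_canonicalisation(R, tau, pi, gamma):
    """Value-Adjusted Levelling (Proposition 2): E_{S'}[R(s,a,S') + gamma*V(S')] - V(s)."""
    V = policy_value(R, tau, pi, gamma)
    c = np.einsum('sap,sap->sa', tau, R + gamma * V[None, None, :]) - V[:, None]
    # The canonicalised reward does not depend on s', so broadcast it back to (S, A, S').
    return np.broadcast_to(c[:, :, None], R.shape).copy()

def starc_distance(R1, R2, tau, pi, gamma):
    """STARC metric of Definition 3 with c = VAL and n = m = L2."""
    def standardise(R):
        c = val_canonicalisation(R, tau, pi, gamma)
        norm = np.linalg.norm(c)
        return c / norm if norm > 0 else c
    return np.linalg.norm(standardise(R1) - standardise(R2))
```

With \( \pi \) chosen as the uniformly random policy, this corresponds to the VAL-based pseudometric evaluated in Section 4.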
For a pseudometric \( d \) on \( R \) to be useful, it is crucial that it induces an upper bound on worst-case regret. Specifically, we want it to be the case that if \( d(R_1, R_2) \) is small, then the impact of using \( R_2 \) instead of \( R_1 \) should also be small. When a pseudometric has this property, we say that it is sound: **Definition 5.** A pseudometric \( d \) on \( R \) is sound if there exists a positive constant \( U \), such that for any reward functions \( R_1 \) and \( R_2 \), if two policies \( \pi_1 \) and \( \pi_2 \) satisfy that \( J_2(\pi_2) \geq J_2(\pi_1) \), then \[ J_1(\pi_1) - J_1(\pi_2) \leq U \cdot (\max_\pi J_1(\pi) - \min_\pi J_1(\pi)) \cdot d(R_1, R_2). \] Let us unpack this definition. \( J_1(\pi_1) - J_1(\pi_2) \) is the regret, as measured by \( R_1 \), of using policy \( \pi_2 \) instead of \( \pi_1 \). Division by \( \max_\pi J_1(\pi) - \min_\pi J_1(\pi) \) normalises this quantity based on the total range of \( R_1 \) (though the term is put on the right-hand side of the inequality, instead of being used as a denominator, in order to avoid division by zero when \( \max_\pi J_1(\pi) - \min_\pi J_1(\pi) = 0 \)). The condition that \( J_2(\pi_2) \geq J_2(\pi_1) \) says that \( R_2 \) prefers \( \pi_2 \) over \( \pi_1 \). Taken together, this means that a pseudometric \( d \) on \( R \) is sound if \( d(R_1, R_2) \) gives an upper bound on the maximal regret that could be incurred under $R_1$ if an arbitrary policy $\pi_1$ is optimised to another policy $\pi_2$ according to $R_2$. It is also worth noting that this includes the special case when $\pi_1$ is optimal under $R_1$ and $\pi_2$ is optimal under $R_2$. Our first main result is that all STARC metrics are sound: **Theorem 1.** All STARC metrics are sound. This means that any STARC metric gives us an upper bound on worst-case regret. Next, we will show that STARC metrics also induce a lower bound on worst-case regret. It may not be immediately obvious why this property is desirable. To see why this is the case, note that if a pseudometric $d$ on $\mathcal{R}$ does not induce a lower bound on worst-case regret, then there are reward functions that have a low worst-case regret, but a large distance under $d$. This would in turn mean that $d$ is not tight, and that it should be possible to improve upon it. In other words, if we want a small distance under $d$ to be both sufficient and necessary for low worst-case regret, then $d$ must induce both an upper and a lower bound on worst-case regret. As such, we also introduce the following definition: **Definition 6.** A pseudometric $d$ on $\mathcal{R}$ is complete if there exists a positive constant $L$, such that for any reward functions $R_1$ and $R_2$, there exist two policies $\pi_1$ and $\pi_2$ such that $J_2(\pi_2) \geq J_2(\pi_1)$ and $$J_1(\pi_1) - J_1(\pi_2) \geq L \cdot (\max_\pi J_1(\pi) - \min_\pi J_1(\pi)) \cdot d(R_1, R_2),$$ and moreover, if both $\max_\pi J_1(\pi) - \min_\pi J_1(\pi) = 0$ and $\max_\pi J_2(\pi) - \min_\pi J_2(\pi) = 0$, then we have that $d(R_1, R_2) = 0$. The last condition is included to rule out certain pathological edge-cases. Intuitively, if $d$ is sound, then a small $d$ is sufficient for low regret, and if $d$ is complete, then a small $d$ is necessary for low regret. Soundness implies the absence of false positives, and completeness the absence of false negatives. Our second main result is that all STARC metrics are complete: **Theorem 2.** All STARC metrics are complete. 
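As a brief aside, to ground the quantity that Definitions 5 and 6 bound, the sketch below approximates the normalised regret for the special case where both policies are optimal (the version also estimated in our experiments in Section 4). It reuses `policy_value` from the snippet above; the use of plain value iteration and all function names are again our own choices.

```python
def optimal_policy(R, tau, gamma, iters=2000):
    """Greedy policy from value iteration on the tabular MDP."""
    n_states, n_actions, _ = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.einsum('sap,sap->sa', tau, R + gamma * V[None, None, :])
        V = Q.max(axis=1)
    pi = np.zeros((n_states, n_actions))
    pi[np.arange(n_states), Q.argmax(axis=1)] = 1.0
    return pi

def normalised_regret(R1, R2, tau, mu0, gamma):
    """Regret under R1 of optimising R2 instead, divided by max_pi J_1 - min_pi J_1."""
    J1 = lambda pi: mu0 @ policy_value(R1, tau, pi, gamma)
    pi_best = optimal_policy(R1, tau, gamma)     # maximises J_1
    pi_worst = optimal_policy(-R1, tau, gamma)   # minimises J_1
    pi_proxy = optimal_policy(R2, tau, gamma)    # what we obtain by optimising R2
    span = J1(pi_best) - J1(pi_worst)
    return (J1(pi_best) - J1(pi_proxy)) / span if span > 0 else 0.0
```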
Theorems 1 and 2 together imply that, for any STARC metric $d$, we have that a small value of $d$ is both necessary and sufficient for a low regret. This means that STARC metrics, in a certain sense, exactly capture what it means for two reward functions to be similar, and that we should not expect it to be possible to significantly improve upon them. We can make this claim formal as follows: **Proposition 7.** Any pseudometrics on $\mathcal{R}$ that are both sound and complete are bilipschitz equivalent. This implies that all STARC metrics are bilipschitz equivalent. Moreover, any other pseudometric on $\mathcal{R}$ that induces both an upper and a lower bound on worst-case regret (as we define it) must also be bilipschitz equivalent to STARC metrics. In Appendix A and B, we provide an extensive analysis of both EPIC and DARD, and show that they fail to induce similar theoretical guarantees. ### 4 EXPERIMENTAL RESULTS In this section we present our experimental results. First, we demonstrate that STARC metrics provide a better estimate of regret than EPIC and DARD in randomly generated MDPs. We then evaluate a STARC metric in a continuous environment. #### 4.1 LARGE NUMBERS OF SMALL RANDOM MDPs Our first experiment compares several STARC metrics to EPIC, DARD, and a number of other non-STARC baselines. In total, our experiment covered 223 different pseudometrics (including rollout regret), derived by creating different combinations of canonicalisation functions, normalisations, and distance metrics. For details, see Appendix G.3. For each pseudometric, we generated a large number of random MDPs, and then measured how well the pseudometric correlates with regret across this distribution. The regret is defined analogously to Definition 5 and 6 except that only optimal policies are considered – for details, see Appendix G.2. We used MDPs with 32 states, 4 actions, $\gamma = 0.95$, a uniform initial state distribution, and randomly sampled sparse non-deterministic transition functions, and for each MDP, we generated several random reward functions. For details on the random generation process, see Appendix G. We compared 49,152 reward function pairs (Appendix G.4), and used these to estimate how well each pseudometric correlates with regret. We show these correlations in Figure 1 and the full data is given in a table in Appendix H. In Appendix H.1, we also provide tables that indicate the impact of changing the metric $m$ or the normalisation function $n$. The canonicalisation functions we used were None (which simply skips the canonicalisation step), $C_{\text{EPIC}}$, $C_{\text{DARD}}$, MinimalPotential (which is the minimal “canonicalisation” that removes potential shaping but not $S'$-redistribution, and therefore is easier to compute), VALPotential (which is given by $R(s,a,s') - V^\pi(s) + \gamma V^\pi(s')$), and VAL (defined in Proposition 2). For both $C_{\text{EPIC}}$ and $C_{\text{DARD}}$, both $\mathcal{D}_S$ and $\mathcal{D}_A$ were chosen to be uniform over $S$ and $A$. For both VALPotential and VAL, $\pi$ was chosen to be the uniformly random policy. Note that VAL is the only canonicalisation which removes both potential shaping and $S'$-redistribution, and thus the only one that meets Definition 1— for this reason, it is listed as “STARC-VAL,” in Figure 1. For the full details about which pseudometrics were chosen, and why, see Appendix G.3. Figure 1: This figure displays the correlation to regret for several pseudometrics. Each point represents one pseudometric, i.e. 
one unique combination of canonicalisation $c$, normalisation $n$, and distance metric $m$. They are grouped together based on their canonicalisation function, with each column corresponding to a different canonicalisation function. Pseudometrics which skip canonicalisation or normalisation are shown in grey. The versions of EPIC and DARD that use the $L_2$ norm for both normalisation $n$ and distance metric $m$ are highlighted in red, as these are the original versions given in Gleave et al. (2020) and Wulfe et al. (2022). The STARC metrics, which are canonicalised using VAL, are reliably better indicators of regret than the other pseudometrics. As we can see, the STARC metrics based on VAL perform noticeably better than all pre-existing pseudometrics – for instance, the correlation of EPIC to regret is 0.778, DARD’s correlation is 0.782, while VAL’s correlation is 0.856 (when using $L_2$ for both $n$ and $m$, which is the same as EPIC and DARD). Out of the 10 best pseudometrics, 8 use VAL (and the other 2 both use VALpotential). Moreover, for each choice of $n$ and $m$, we have that the VAL canonicalisation performs better than the EPIC canonicalisation in 40 out of 42 cases. Taken together, these results suggest that STARC metrics robustly perform better than the existing alternatives. Our results also suggest that the choice of normalisation function $n$ and metric $m$ can have a significant impact on the pseudometric’s accuracy. For instance, when canonicalising with VAL, it is better to use the $L_1$ norm than the $L_2$ norm for both normalisation and taking the distance – this increases the correlation with regret from 0.856 to 0.873. Another example is the EPIC canonicalisation – when paired with the weighted $L_\infty$ norm for normalisation and the (unweighted) $L_\infty$ norm for taking the distance, instead of using the $L_2$ norm for both, its correlation decreases from 0.778 to 0.052. As we can see in Figure 1, this effect appears to be more prominent for the non-STARC metrics. Another thing to note is that it seems like VALpotential can perform as well as VAL despite not canonicalising for $S'$-redistribution, but only when a ($\tau$-)weighted norm is used. This may be because $\tau$-weighted norms set all impossible transitions to 0, and reduce the impact of very unlikely transitions; plausibly, this could in practice be similar to canonicalising for $S'$-redistribution. When using VAL, $L_1$ was the best unweighted norm for both $m$ and $n$ in our experiment. The only exceptions are when no normalisation is used and $m = L_\infty$, and when $n = \text{weighted}-L_2$ and $m = \text{weighted}-L_\infty$. However, in the first case, both the EPIC-based and the VAL-based pseudometric perform badly (since no normalisation is used), and in the second case, the difference between them is not large. 4.2 The Reacher Environment Our next experiment estimates the distance between several hand-crafted reward functions in the Reacher environment from MuJoCo (Todorov et al., 2012). This is a deterministic environment with an 11-dimensional continuous state space and a 2-dimensional continuous action space. The reward functions we used are: 1. **GroundTruth**: The Euclidean distance to the target, plus a penalty term for large actions. 2. **PotentialShaped**: GroundTruth with random potential shaping. 3. 
**SecondPeak**: We create a second target in the environment, and reward the agent based on both its distance to this target and its distance to the original target, but give a greater weight to the original target.
4. **Random**: A randomly generated reward, implemented as an affine transformation from \( s, a, s' \) to real numbers, with the weights and bias randomly initialised.
5. **Negative**: Returns \(-\text{GroundTruth}\).

We expect GroundTruth to be equivalent to PotentialShaped, similar to SecondPeak, orthogonal to Random, and opposite to Negative. We used the VAL canonicalisation function with the uniform policy, and normalised and took the distance with the \( L_2 \)-norm. This pseudometric was then estimated through sampling; full details can be found in Appendices D and I. The results of this experiment are given in Table 1. As we can see, the relative ordering of the reward functions matches what we expect. However, the magnitudes of the estimated distances are noticeably larger than their real values; for example, the actual distance between GroundTruth and PotentialShaped is 0, but it is estimated as \( \approx 0.9 \). The reason for this is likely that the estimation involves summing over absolute values, which makes all noise positive. Nonetheless, for the purposes of ranking the rewards, this is not fundamentally problematic.

| PotentialShaped | SecondPeak | Random | Negative |
|-----------------|------------|--------|----------|
| 0.8968          | 1.2570     | 1.3778 | 1.706    |

Table 1: This table displays the estimated distance (using \( c = \text{VAL}, n = L_2, \) and \( m = L_2 \)) between each reward function in the Reacher environment and the GroundTruth reward function.

5 Discussion

We have introduced STARC metrics, and demonstrated that they provide compelling theoretical guarantees. In particular, we have shown that they are both sound and complete, which means that they induce both an upper and a lower bound on worst-case regret. As such, a small STARC distance is both necessary and sufficient to ensure that two reward functions induce a similar ordering of policies. Moreover, any two pseudometrics that are both sound and complete must be bilipschitz equivalent. This means that any pseudometric on \( \mathcal{R} \) that has the same theoretical guarantees as STARC metrics must be equivalent to STARC metrics. We have thus provided what is essentially a complete answer to the question of how to correctly measure the distance between reward functions. Moreover, our experiments show that STARC metrics have a noticeably better empirical performance than any existing pseudometric in the current literature, for a wide range of environments. This means that STARC metrics offer direct practical advantages, in addition to their theoretical guarantees. In addition to this, STARC metrics are both easy to compute and easy to work with mathematically. As such, STARC metrics will be useful for both empirical and theoretical work on the analysis and evaluation of reward learning algorithms.

Our work can be extended in a number of ways. First of all, it would be desirable to establish more conclusively which STARC metrics work best in practice. Our experiments are indicative, but not conclusive. Secondly, our theoretical results assume that \( S \) and \( A \) are finite; it would be desirable to generalise them to continuous environments. Third, we use a fairly strong definition of regret. We could consider some weaker criterion that may allow for the creation of more permissive reward metrics.
Finally, our work considers the MDP setting – it would be interesting to also consider other classes of environments. We believe that the multi-agent setting would be of particular interest, since it introduces new and more complex dynamics that are not present in the case of MDPs. ACKNOWLEDGEMENTS The authors wish to acknowledge and thank the financial support of the UK Research and Innovation (UKRI) [Grant ref EP/S022937/1] and the University of Bristol. REFERENCES Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences, 2017. Tom Everitt, Victoria Krakovna, Laurent Orseau, Marcus Hutter, and Shane Legg. Reinforcement learning with a corrupted reward channel. CoRR, abs/1705.08417, 2017. URL http://arxiv.org/abs/1705.08417 Eugene A. Feinberg and Uriel G. Rothblum. Splitting randomized stationary policies in total-reward markov decision processes. Mathematics of Operations Research, 37(1):129–153, 2012. ISSN 0364765X, 15265471. URL http://www.jstor.org/stable/41412346 Adam Gleave, Michael Dennis, Shane Legg, Stuart Russell, and Jan Leike. Quantifying differences in reward functions, 2020. URL https://arxiv.org/abs/2006.13900 Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in Atari. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, volume 31, pp. 8022–8034, Montréal, Canada, 2018. Curran Associates, Inc., Red Hook, NY, USA. Erik Jenner, Herke van Hoof, and Adam Gleave. Calculus on MDPs: Potential shaping as a gradient, 2022. URL https://arxiv.org/abs/2208.09570 Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019. Andrew Y Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, volume 1, pp. 663–670, Stanford, California, USA, 2000. Morgan Kaufmann Publishers Inc. Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 278–287, Bled, Slovenia, 1999. Morgan Kaufmann Publishers Inc. Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models, 2022. URL https://arxiv.org/abs/2201.03544 Gavin A Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems, volume 37. University of Cambridge, Department of Engineering Cambridge, UK, 1994. Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson, 4 edition, 2020. Joar Skalse and Alessandro Abate. Misspecification in inverse reinforcement learning, 2023. Joar Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. arXiv preprint arXiv:2203.07475, 2022a. Joar Skalse, Niki Howe, Krasheninnikov Dima, and David Krueger. Defining and characterizing reward hacking. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2022b. Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, second edition, 2018. ISBN 9780262352703. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. 
In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033, 2012. doi: 10.1109/IROS.2012.6386109.
iTsHStJKcm
Within Table 1, the Baguette demonstrates a volume change exceeding 30%. What does this numerical value signify, and how might it influence the manipulation process? Providing corresponding visual results would enhance clarity in understanding the observed data and its potential impact on the manipulation process.
MAKE A DONUT: LANGUAGE-GUIDED HIERARCHICAL EMD-SPACE PLANNING FOR ZERO-SHOT DEFORMABLE OBJECT MANIPULATION Anonymous authors Paper under double-blind review Figure 1: Schematic illustration of our method in handling unseen dough making tasks, where Language Models (LLMs) are utilized at a high level for task decomposition and subgoal generation, specifying tool names and generating corresponding Python code. The low-level operates on particle space controls, precisely determining the next achievable candidate iteratively without the need for prior demonstrations or task-specific training. ABSTRACT Deformable object manipulation stands as one of the most captivating yet formidable challenges in robotics. While previous techniques have predominantly relied on learning latent dynamics through demonstrations, typically represented as either particles or images, there exists a pertinent limitation: acquiring suitable demonstrations, especially for long-horizon tasks, can be elusive. Moreover, basing learning entirely on demonstrations can hamper the model’s ability to generalize beyond the demonstrated tasks. In this work, we introduce a demonstration-free hierarchical planning approach capable of tackling intricate long-horizon tasks without necessitating any training. We employ large language models (LLMs) to articulate a high-level, stage-by-stage plan corresponding to a specified task. For every individual stage, the LLM provides both the tool’s name and the Python code to craft intermediate subgoal point clouds. With the tool and subgoal for a particular stage at our disposal, we present a granular closed-loop model predictive control strategy. This leverages Differentiable Physics with Point-to-Point correspondence (DiffPhysics-P2P) loss in the earth mover distance (EMD) space, applied iteratively. Experimental findings affirm that our technique surpasses multiple benchmarks in dough manipulation, spanning both short and long horizons. Remarkably, our model demonstrates robust generalization capabilities to novel and previously unencountered complex tasks without any preliminary demonstrations. We further substantiate our approach with experimental trials on real-world robotic platforms. 1 INTRODUCTION Manipulation of deformable objects remains one of the most intricate challenges in robotics due to the inherent complexity and unpredictable behavior of such objects. Deformable objects can be broadly categorized into two major categories: thin-shell surfaces, such as clothes [Nocentini et al. (2022); Wu et al. (2019)] and ropes [Sundaresan et al. (2020)]; and volumetric objects, such as dough [Lin et al. (2022c); Huang et al. (2021)]. In this paper, we focus on the latter and mainly study the manipulation of dough as a typical representative. Existing works on dough-like volumetric deformable objects majorly rely on a learned dynamic model of the underlying particles [Zhu et al. (2022); Arriola-Rios et al. (2020); Yin et al. (2021)]. However, these methods all require a substantial amount of collected or semi-auto-generated demonstrations for training the dynamic models, which poses two critical issues: firstly, the difficulty of obtaining a comprehensive set of demonstrations, particularly for long-horizon tasks; and more importantly, the limited capability of generalizing beyond the scope of the provided demonstrations. 
Given this context, there is an imperative need for a more versatile and universally applicable approach to deformable object manipulation, one that can navigate the intricacies of both short and long-horizon tasks without being overly reliant on demonstrations. This paper introduces a novel demonstration-free hierarchical planning method that addresses these challenges.

In this study, we delve into the manipulation of dough, a quintessential example of deformable object manipulation [Lin et al. (2022c,b); Huang et al. (2021)]. As illustrated in Figure 1, our approach takes a natural language user prompt as input and leverages a large language model (LLM) to formulate a high-level plan detailing the selection of tools and the representation of intermediate subgoals at each phase. While LLMs might not produce precise low-level actions for each timestep, they exhibit proficiency in breaking down intricate tasks into manageable stages. Each of these stages exclusively involves a single tool and a piece of dough. The concept of anchoring language to a sequential plan has been investigated in prior research [Huang et al. (2022); Ahn et al. (2022); Liang et al. (2023)]. However, such methodologies have largely been confined to generating high-level linguistic instructions for robots for generic household tasks (e.g., “open the fridge” or “bring me the apple”). They haven’t been tailored for intricate tasks like deformable object manipulation. Indeed, there is a significant gap in the literature when it comes to utilizing LLMs for manipulating deformable entities, especially when the challenge entails crafting complex shapes (like donuts or baguettes) based purely on linguistic outputs. In our approach, rather than defining the robot’s actions or policy linguistically at intermediate stages, we direct LLMs to express their object-centric state visualizations via Python code. This distinctive approach sets our method apart from previous techniques.

In bridging adjacent subgoal imaginations generated from LLMs, we introduce a simple but novel EMD space planning algorithm complemented by model predictive control. The algorithm evaluates the gradient of the earth mover’s distance between the prevailing point cloud and the target point cloud with respect to each individual point. Subtracting this gradient from the current point cloud yields the succeeding viable candidate. This process facilitates a direct point-to-point correspondence between the existing state and the upcoming candidate, enabling the deployment of differentiable physics based on a straightforward per-point loss. Through this hierarchical strategy, our system can adeptly tackle novel, intricate tasks devoid of any demonstrations or prior training. Experimental results demonstrate that our methodology markedly enhances the efficacy of both single-tool and multiple-tool dough manipulation endeavors and can potentially transfer to real-world robotic applications.

2 RELATED WORKS

**Differentiable physics for deformable object manipulation.** Differentiable physics is a pivotal technique in deformable object manipulation. It exploits the gradient from differentiable simulators to derive optimal actions. Existing literature [Hu et al. (2019a); Huang et al. (2021); Hu et al. (2019b); Liang et al. (2019)] reveals that differentiable physics offers an efficient means to tackle short-horizon simple tasks. Nevertheless, as highlighted by [Antonova et al. (2023)], the reliance of differentiable physics on local gradients poses challenges.
The loss landscape is often rugged with potentially spurious local optima, making it a less reliable method for certain tasks.

**Long-horizon planning for deformable object manipulation.** There’s an emerging interest in long-horizon strategies for deformable object manipulation. DiffSkill [Lin et al. (2022b)] employs a latent space planner to assess various skill combinations and sequences to achieve the desired outcome. Subsequently, PASTA [Lin et al., 2022c] introduced the PlAnning with Spatial-Temporal Abstraction framework, melding spatial and temporal abstraction for effective long-horizon task planning. Robocraft [Shi et al., 2022] advances a particle-based dynamics model using graph neural networks (GNNs) to grasp the system’s structural nuances. This knowledge is then harnessed to optimize action sequences from a randomly initialized trajectory. Robocook [Shi et al., 2023] adopts point cloud scene representation and leverages GNNs for modeling tool-object interactions. It then synergizes tool classification with self-supervised policy learning for crafting object manipulation plans. Nonetheless, these methodologies have their constraints. They necessitate prior insight into potential tool candidates and a predetermined number of stages for task planning, which affects their adaptability.

**Language models for robot manipulations.** Leveraging large language models for robotics is currently a bustling research domain. Recent studies such as [Huang et al., 2022; Liang et al., 2023; Ahn et al., 2022] strive to dissect complex tasks into manageable sub-stages. These methods, although innovative, are largely agnostic to the underlying geometry, providing only high-level robot directives. To enrich these models with diverse modalities, Socratic Models (SM) [Zeng et al., 2022] developed a modular framework where new tasks are delineated as a linguistic interaction between pre-trained models and auxiliary modules, underpinned by Socratic reasoning. PaLM-E [Driess et al., 2023] engineered a versatile embodied multimodal language model tailored for a spectrum of downstream reasoning challenges. VoxPoser [Huang et al., 2023] harnesses LLMs to craft voxel value maps in a 3D observation space, guiding robot-environment interactions. Since LLMs often cannot directly produce the robot’s raw actions, an alternative approach is to map intricate tasks to rewards. Some other studies [Goyal et al., 2019; Lin et al., 2022a] focus on curating domain-specific reward models, which necessitate abundant annotated training data. In contrast, works like [Kwon et al., 2023; Yu et al., 2023] generate reward metrics automatically from pretrained LLMs, though their application is predominantly limited to rigid or articulated objects. Deformable object manipulations remain a relatively under-explored territory for LLMs, largely due to the immense degrees of freedom inherent to such tasks and the paucity of available demonstration data.

### 3 Method

Our method adopts a hierarchical planning approach combining both language models and low-level particle space controls. At the top level, LLMs are employed to break down a complex task into sub-stages and to output both the code to generate subgoal states and the tool name for each. We observe that LLMs possess rich knowledge about high-level task semantics even though they cannot directly output raw low-level actions. At the bottom level, given the current tool and subgoal, our technique iteratively identifies the next reachable target based on the present state and subgoal.
A key distinction between our approach and previous ones is that ours doesn’t necessitate any demonstration or training for the target task. In Section 3.1, we detail the partitioning of complex tasks into sub-stages and the associated tools. Section 3.2 elaborates on the iterative process of determining the next goal based on the current state and subgoal. An overview of our method is illustrated in Figure 1. Our method processes the sampled particles from the volumetric dough as input and produces the actions of the currently used tool as output. In the subsequent context, we will use point clouds and particles interchangeably.

#### 3.1 Multiple Tool Selection and Hierarchical Planning

To address the challenge of coordinating between different tools in long-horizon tasks, we turn to the capabilities of large language models (LLMs), especially ChatGPT-4 [OpenAI, 2023]. Our observations indicate that while LLMs may not precisely produce low-level actions, they excel in deconstructing intricate long-horizon tasks into several stages, each centered around a single tool. What’s more, for each of these stages, the LLM can both identify the appropriate tool and generate the corresponding Python code to produce intermediate subgoal point clouds. Presented in the form of particles, these subgoals readily align with the target requirements of our proposed single-tool EMD space planning algorithm. To guide this process, we devised a prompt template that imparts to the LLM foundational information about available tools and their potential interactions with the dough. Additionally, we introduce a set of guidelines designed to refine and direct the LLM when completing more complex, long-horizon deformable tasks. One important guideline is to force the LLM to give volume-preserving input and output at each stage, so the target is more physically realistic. In addition, we leverage the chain reasoning technique [Wei et al., 2022] to help the LLM better deduce the shape parameters that satisfy this constraint. In Table 1, we provide the average relative volume change between the LLM’s generated final output and the input dough for all three evaluated tasks. Details of the prompt template can be found in Appendix A.7.

| | Donut ↓ | Baguette ↓ | TwoPancakes ↓ |
|----------------------|---------|------------|---------------|
| w/o Volume Preserving and Chain Reasoning | 73.9% | 42.5% | 65.0% |
| Ours | 9.8% | 38.9% | 0.0% |

Table 1: Volume change with and without volume-preserving and chain reasoning.

In addition, we also ask the LLM to output the following items for each stage during planning:

- A one-line explanation of what this step is doing.
- The name of the tool to be used.
- The Python code to generate target point clouds. Do remember to add their absolute world location when generating complex shapes.
- The variable names for the input and output.
- The location of each piece in a dictionary format with a variable name as the key.
- The volume of each piece, also in a dictionary format with a variable name as the key.

Building on this, we only extract the generated Python code corresponding to each stage, from which we produce the intermediate subgoal point cloud. Other outputs, though not used directly, are part of the chain reasoning and therefore contribute to the final quality of the generated subgoal. The subgoal point cloud is then fed directly into our single-tool planning module.
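As an illustration, the subgoal-generation code that the LLM might emit for the torus stage of the Donut task could resemble the sketch below. This is a hypothetical example written for exposition rather than an actual LLM response or part of our prompt; it shows how the volume-preserving constraint pins down the shape parameters before the subgoal point cloud is sampled.

```python
import numpy as np

def make_torus_subgoal(center, volume, n_points=2000, aspect=3.0):
    """Sample a point cloud filling a torus whose volume matches the input dough.

    Torus volume is V = 2 * pi^2 * R * r^2 with major radius R = aspect * r, so the
    volume-preserving constraint gives r = (V / (2 * pi^2 * aspect)) ** (1/3).
    """
    r = (volume / (2 * np.pi ** 2 * aspect)) ** (1.0 / 3.0)
    R = aspect * r
    u = np.random.uniform(0, 2 * np.pi, n_points)          # angle around the ring
    v = np.random.uniform(0, 2 * np.pi, n_points)          # angle around the tube
    rho = r * np.sqrt(np.random.uniform(0, 1, n_points))   # approximately fill the tube
    x = (R + rho * np.cos(v)) * np.cos(u)
    y = rho * np.sin(v)                                     # torus lies flat in the x-z plane
    z = (R + rho * np.cos(v)) * np.sin(u)
    return np.stack([x, y, z], axis=1) + np.asarray(center)

# Hypothetical usage for one stage: a donut subgoal centred at the dough's location,
# with the same volume as the input ball of dough.
donut_subgoal = make_torus_subgoal(center=[0.5, 0.1, 0.5], volume=0.004)
```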
Consequently, we are equipped to tackle intricate tasks incrementally, stage by stage, eliminating the need for demonstrations.

### 3.2 Single Tool Planning

As the LLM can decompose a complex task into several stages with generated sub-targets, for each stage, given the current point cloud and the sub-target, we introduce a novel goal-aware planning algorithm with model predictive control. At each step of the single-tool planning algorithm, our method initially identifies an optimal starting position for the tool. Subsequently, it forecasts the nearest attainable target and refines the actions employing differentiable physics with point-to-point correspondences, termed DiffPhysics-P2P. If this step doesn’t yield progress, our model reverts the actions and re-strategizes using a new starting position. In this section, we will first talk about DiffPhysics-P2P, then the initial position selection, and finally the failure-aware tool resetting technique.

**DiffPhysics-P2P.** Given the current point cloud and the goal, we pinpoint the subsequent reachable point cloud by executing multiple small steps (specifically 20, as per our experiments) via gradient descent within the Earth Mover Distance (EMD) space. Formally, each step of gradient descent is:
\[ p'_i = p_i - \alpha \cdot \frac{\partial\, \text{emd}(\{p_j\}, \{\tilde{p}_j\})}{\partial p_i}. \] (1)
Given the current point cloud, denoted as \( \{p_i\} \), where \( i \) is the point index, and the subgoal \( \{\tilde{p}_i\} \), our objective is to discern the ensuing reachable candidate. This is achieved by incrementally transitioning the current point cloud towards the target within the EMD space. The candidate serves as our model’s prediction of the underlying particle dynamics.

Figure 2: **EMD-space planning with DiffPhysics-P2P.** We find the next reachable target by running small steps within the EMD space. The induced point-to-point correspondence can provide better gradients when optimizing actions through differentiable physics.

Figure 3: **Illustration of how tool reset works.** By resetting the tool position when no improvement can be made, we can jump out of the local minima and get a better global solution.

A notable advantage of the EMD gradient is its inherent capacity to furnish a one-to-one correspondence between \( p'_i \) and \( p_i \), as elucidated by Equation (1). This characteristic permits the application of the following straightforward per-point loss using differentiable physics:
$$L = \sum_i \|p'_i - p_i\|_1.$$ (2)
This diverges from several preceding methodologies, wherein the naive EMD loss (expressed as \( \text{emd}(\{p_i\}, \{p'_i\}) \)) is employed devoid of any point-to-point correspondence. Ablation studies underscore that our point-to-point correspondence substantially outperforms traditional differentiable physics by enhancing the gradient flow. An illustration of this algorithm is given in Figure 2.

**Initial position selection.** The aforementioned EMD planning algorithm demonstrates efficacy when the initial tool position is in proximity to the dough. However, challenges arise when the tool is situated at a considerable distance from the dough, resulting in the algorithm’s inability to find the global minimum. To figure out a good initial position for the tool, we leverage the strategy initially proposed in Li et al. (2022).
Specifically, with the present deformation field deduced from the induced point-to-point correspondence (represented as \( p'_i - p_i \)), we employ the following equation to identify the candidate tool position:
$$x^* = \arg\max_x \sum_i \frac{\|p'_i - p_i\|_1}{\text{sdf}_x(p_i) + \delta}.$$ (3)
The numerator encapsulates the point-to-point correspondence loss of the extant point cloud. In contrast, the denominator represents the signed distance field (SDF) of the tool when positioned at \( x \), evaluated at particle \( p_i \). From an intuitive perspective, the goal is the tool’s strategic placement in close proximity to the dough, while emphasizing significant deformations within the EMD space. For practical application, we introduce a minimal margin \( \delta \) in the denominator to circumvent numerical instability issues.

**Tool reset upon failures.** Integrating the previously described techniques allows us to first select an optimal initial tool position, followed by iteratively progressing towards the target as per Equation (1). However, even with an advantageous starting tool position, challenges may arise due to the inherent intricacies of differentiable physics. As illustrated in Figure 3, consider a scenario where the task is to use a rolling pin to flatten a sphere of dough. While initiating from a favorable position, iterating and optimizing candidates in the EMD space could land us at a local minimum. This could result in an uneven texture on one side of the dough, manifesting as a lump. To circumvent this predicament, we reset the tool’s position if no advancement is observed, ensuring an escape from potential local minima. The comprehensive single-tool iterative planning process is detailed in Algorithm 1.

**Algorithm 1** Goal-Aware Planning with Model Predictive Control

**Input:** Current system particles \( \{p_i\} \), target particles \( \{\tilde{p}_i\} \)
**Output:** Predicted actions at each timestep \( \{a_t\} \)
1: \( t := 0 \), need\_reset := 1, emd\_last := \( \infty \)
2: while \( t < \text{max\_steps} \) do
3: \( \{p'_i\} := \{p_i\} \)
4: for \( k \) in 1...K do
5: \( \{p'_i\} := \{p'_i\} - \alpha \cdot \nabla_{\{p'_i\}} \text{emd}(\{p'_i\}, \{\tilde{p}_i\}) \) ▶ move particles along the grad. of emd
6: end for
7: Set \( \{p'_i\} \) as the next reachable candidate
8: if need\_reset then
9: \( x^* := \arg\max_x \sum_i \frac{\|p'_i - p_i\|_1}{\text{sdf}_x(p_i) + \delta} \) ▶ find the optimal initial tool position
10: Set initial tool position to \( x^* \)
11: need\_reset = 0
12: end if
13: \( a_{t:t+L} := 0 \) ▶ initialize actions for horizon \( L \)
14: for \( j \) in 1...J do
15: \( a_{t:t+L} := a_{t:t+L} - \nabla_a \text{Sim}(\sum_i \|p'_i - p_i\|_1) \) ▶ do diff. physics with p2p corr.
16: end for
17: Execute \( a_{t:t+L} \) and get the resulting particle state \( \{\hat{p}_i\} \) ▶ \( \{\hat{p}_i\} \) may be different from \( \{p'_i\} \)
18: \( \text{emd}_{\text{curr}} := \text{emd}(\{\hat{p}_i\}, \{\tilde{p}_i\}) \) ▶ calculate the actual emd to the target after executing \( a_{t:t+L} \)
19: if \( \text{emd}_{\text{curr}} > \text{emd}_{\text{last}} \) then ▶ if not making any progress
20: \( \text{need\_reset} = 1 \) ▶ find another initial position to jump out of the local minimum
21: continue
22: end if
23: if \( \text{emd}_{\text{curr}} < \tau \) then ▶ if we are close enough to the target, do early termination
24: break
25: end if
26: \( t = t + L \), \( \text{emd}_{\text{last}} = \text{emd}_{\text{curr}} \), \( \{p_i\} = \{\hat{p}_i\} \) ▶ run \( L \) steps forward
27: end while

4 EXPERIMENT

In our experiments, we validate the efficacy of our method across four distinct dimensions. In Section 4.1, we deploy our hierarchical LLM-guided planning algorithm to more intricate and unseen long-horizon tasks. In Section 4.2, we conduct simpler tasks involving only one tool to authenticate the effectiveness of our single-tool planning algorithm. In Section 4.3, we perform distinct ablation studies for both single-tool and multiple-tool planning algorithms. These studies are crucial for validating the individual components of our algorithms, confirming the contribution of each component to the overall performance of the system. In Section 4.4, we translate our simulated actions to a real-world robot to manipulate the actual dough, illustrating the practical applicability and transferability of our method from sim to real.

**Baselines.** We employ the following baselines for comparisons: Firstly, we consider a simplistic gradient-based action optimization method utilizing differentiable physics, denoted as Diff. Physics. Secondly, we examine a sophisticated long-horizon planning algorithm, PASTA [Lin et al. (2022c)], which integrates both spatial and temporal abstraction. Thirdly, we explore a behavior cloning method that trains a goal-conditioned policy, abbreviated as BC. Lastly, we assess a model-free RL method, SAC-N [An et al. (2021)]. For complex tasks, generating a large dataset of demonstrations is intractable. Thus we manually annotate a single demonstration sequence for each task for training BC and SAC-N. For PASTA, we employ its pre-trained model on single-tool demonstrations and assess its generalization capabilities on these unseen, intricate tasks. All the compared methods receive the point cloud of particles as input.

**Metrics.** We adopt the metrics in [Lin et al. (2022b,c)] and report the normalized decrease (score) in the Earth Mover Distance (EMD), which is computed using the Sinkhorn divergence as
$$\text{emd}(\{p_0\}, \{p^*\}) - \text{emd}(\{p_T\}, \{p^*\}),$$
where \( \{p_0\} \) is the initial point cloud, \( \{p_T\} \) is the final point cloud after execution, and \( \{p^*\} \) is the ground-truth target 3D point cloud. Additionally, we also calculate the success rates based on pre-set thresholds (Appendix A.1). For each multiple-tool task, we utilize the LLM’s Python code output from the last stage to generate the point cloud, serving as the ground truth. We observe that these point clouds, despite being generated by the LLM, are of high quality and adeptly describe the target shape. For single-tool tasks, we follow previous literature [Lin et al. (2022c,b)] to sample 5 random targets at different shapes and locations.
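To make the EMD computations concrete, the sketch below shows how both the score above and the inner gradient step of Equation (1) could be implemented with the Sinkhorn divergence of Feydy et al. (2019), assuming PyTorch and the geomloss package; the blur parameter, step size, and function names are our own choices rather than values used in the paper.

```python
import torch
from geomloss import SamplesLoss  # Sinkhorn divergence (Feydy et al., 2019)

emd = SamplesLoss(loss="sinkhorn", p=2, blur=0.01)  # blur is our choice, not the paper's

def emd_score(p_init, p_final, p_target):
    """Normalized decrease in EMD: emd(p_0, p*) - emd(p_T, p*)."""
    return (emd(p_init, p_target) - emd(p_final, p_target)).item()

def next_reachable_candidate(p, p_subgoal, alpha=0.02, k=20):
    """Inner loop of Equation (1): k small steps along the negative EMD gradient.

    The returned tensor keeps the same row ordering as the input, which provides the
    point-to-point correspondence exploited by DiffPhysics-P2P.
    """
    p = p.clone().requires_grad_(True)
    for _ in range(k):
        loss = emd(p, p_subgoal)
        (grad,) = torch.autograd.grad(loss, p)
        p = (p - alpha * grad).detach().requires_grad_(True)
    return p.detach()
```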
Samples of the generated target point clouds are shown in Appendix A.4. For each multiple-tool task, we generate 4 distinct responses from the LLM, with each response being assessed 5 times, culminating in a total of 20 trials per task; for each single-tool task, we follow previous literature to evaluate one trial per target, resulting in 5 trials per task.

### 4.1 Multiple-Tool Selection and Planning

**Environment setup.** We examine three intricate long-horizon tasks that necessitate the use of multiple tools: Donut, Baguette, and TwoPancakes. As implied by their names, these tasks require the agent to create a donut, a baguette, and two pancakes, respectively. In the TwoPancakes task, the dough is initially presented as a rectangle, prepared and ready for cutting. For the Donut and Baguette tasks, the dough is initially provided in the form of a unit ball. A more comprehensive description of each task can be found in Appendix A.1.

**Results.** The quantitative results are presented in the left part of Table 2. It is evident from the data that our method substantially surpasses preceding approaches, exhibiting superior performance across all three tasks by a considerable margin. It is crucial to underscore that our model has never been exposed to these tasks before, and it employs the high-level stage plans generated by the LLM and the single-tool EMD space planning method to dynamically generate actions. A detailed, stage-by-stage visual representation of the process is provided in the left part of Figure 4, illustrating the nuanced steps and strategies employed by our method in navigating and accomplishing the tasks. The key distinction of our approach lies in its zero-shot learning ability, which enables it to adapt to novel tasks without task-specific fine-tuning or training. This is a significant leap over the sampling-based methods, which may fail to provide a feasible path or may require extensive data for complex shapes. The Large Language Model (LLM) plays a crucial role in our framework by charting a high-level planning path, which serves as a guide for the subsequent execution by the low-level Earth Mover’s Distance (EMD) space planning algorithm. This hierarchical structure is pivotal; the LLM alone cannot translate its generated plans into the raw actions required for robotic manipulation. Conversely, without the strategic direction provided by the LLM, the EMD space planning lacks a coherent objective, struggling to discern what end states are physically plausible for the robot to achieve. PASTA, while effective within its demonstrated scope, requires a dataset to train on in order to sample feasible intermediate states and is thus inherently limited in its ability to generalize to new shapes, e.g., a donut. All their modules, like the VAE, cost predictor, etc., are tailored to their collected training data. This data-driven dependency hinders its application to the more complex tasks our framework successfully tackles. More qualitative comparisons are given in Appendix A.5.

| Method | Donut | Baguette | TwoPancakes | Spread | Cut | Arrange |
|--------|-------|----------|-------------|--------|-----|---------|
Physics | 0.141/0% | 0.175/20% | 0.583/0% | 0.184/20% | 0.401/60% | 0.296/20% | | PASTA | 0.020/0% | -0.116/0% | -0.856/0% | 0.155/20% | 0.060/40% | 0.052/0% | | BC | 0.001/0% | -0.606/0% | 0.220/0% | 0.441/60% | -0.488/20% | -0.512/0% | | SAC-N | 0.003/0% | -0.306/0% | 0.127/0% | 0.000/0% | -2.827/0% | 0.267/0% | | Ours | **0.346/75%** | **0.501/75%** | **0.858/65%** | **0.680/100%** | **0.685/100%** | **0.981/100%** | Table 2: Quantitative comparisons on both single-tool and multiple-tool tasks. 4.2 Single-Tool Planning Environment setup. In the simulation environment provided by PASTA [Lin et al., 2022c], we examine three elementary tasks related to dough manipulation: Spread, Cut, and Arrange. Each of these tasks necessitates the use of only one tool at a time and can be finished within 200 time steps. More task descriptions can be found in Appendix A.1. Results. Quantitative outcomes are presented in Table 2. It is evident that our approach consistently surpasses previous baselines by a substantial margin. The baselines fail to accomplish these tasks, whereas our method attains a 100% success rate in all of them. It is also noteworthy that, in contrast to prior learning-based approaches, our method does not necessitate any demonstration data. Remarkably, our method does not even entail any training, rendering it immediately applicable to new tasks. Qualitative results are illustrated on the right of Figure 4. 4.3 Ablation Studies Multiple-tool ablations. Figure 5 left presents our ablation studies on planning without the incorporation of high-level plans generated by the LLM. Additionally, we conduct ablation on the volume-preserving guidance with chain reasoning, which is proved to be crucial for maintaining both a high success rate and consistent target volume generation. Single-tool ablations. Figure 5 right presents our ablation studies on the removal of the DiffPhysics-P2P, the initial position selection component, or the tool resetting module within the single-tool planning algorithm. 4.4 Real-Robot Experiments Environment setup. The application of this algorithm to a real-world robot is of immense interest. For this purpose, an experimental setup is established using the UFACTORY xArm 6 and some clay. Given the experimental setting and observations, the proposed planning algorithm is used to generate a trajectory in the simulator, and subsequently, the controller is employed to execute this trajectory in the real world. Despite the existence of discrepancies due to physical constraints, such as the real clay being stickier than in the simulation, the method has demonstrated its performance in generating accurate trajectories and accomplishing tasks. In Figure 6, qualitative real-robot trajectories are provided for one multiple-tool task: TwoPancakes, and two single-tool tasks: Cut, Spread. 5 Conclusions We introduced a new hierarchical planning method for robotic deformable object manipulation, enabling complex tasks without prior demonstrations. This method surpasses previous demonstration- based techniques, ensuring better adaptability to new scenarios. Using large language models, it generates high-level plans and intermediate goals, which are executed through a unique closed-loop predictive control using Differentiable Physics. Our approach showed exceptional performance and adaptability in dough manipulation benchmarks, marking a significant step forward in deformable object manipulation. 
**Figure 7:** “Make a bowl.” An example that leverages off-the-shelf text-to-3D algorithms to generate complex shapes. **Limitations.** While our method is adept at handling complex tasks involving long-horizon planning, it is limited to generating simple shapes that can be described with the Python codes produced by the LLM. Generating Python code to describe the shape of an arbitrary object is extremely challenging, if not impossible, for the LLM. However, we have made efforts to circumvent this limitation by having the LLM generate intermediate text descriptions that can be input into state-of-the-art Text-to-3D generative models, such as Point-E [Nichol et al., 2022]. Figure 7 illustrates an example of creating a bowl from the dough. The shape of a bowl is difficult to describe using pure Python code, so the LLM outputs text descriptions for each subgoal, like “Flattened Sphere” and “Bowl”. These text descriptions are then interpreted by Point-E to generate 3D point clouds. We can then proceed with our planning algorithm, using these point clouds as our intermediate targets. As LLMs continue to evolve, we anticipate more accurate and intricate shape generation that will integrate with our proposed framework. More discussions of future works can be found in Appendix A.10. REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. Advances in neural information processing systems, 34:7436–7447, 2021. Rika Antonova, Jingyun Yang, Krishna Murthy Jatavallabhula, and Jeannette Bohg. Rethinking optimization with differentiable simulation from a global perspective. In Conference on Robot Learning, pp. 276–286. PMLR, 2023. Veronica E Arriola-Rios, Puren Guler, Fanny Ficuciello, Danica Kragic, Bruno Siciliano, and Jeremy L Wyatt. Modeling of deformable objects for robotic manipulation: A tutorial and review. Frontiers in Robotics and AI, 7:82, 2020. Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouve, and Gabriel Peyré. Interpolating between optimal transport and mmd using sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 2681–2690, 2019. Prasoon Goyal, Scott Niekum, and Raymond J Mooney. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020, 2019. Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédó Durand. Diffetaichi: Differentiable programming for physical simulation. arXiv preprint arXiv:1910.00935, 2019a. Yuanming Hu, Jiancheng Liu, Andrew Spielberg, Joshua B Tenenbaum, William T Freeman, Jiajun Wu, Daniela Rus, and Wojciech Matusik. Chainqueen: A real-time differentiable physical simulator for soft robotics. In 2019 International conference on robotics and automation (ICRA), pp. 6265–6271. IEEE, 2019b. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 
Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022. Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, and Li Fei-Fei. Voxposer: Composable 3d value maps for robotic manipulation with language models. arXiv preprint arXiv:2307.05973, 2023. Zhiao Huang, Yuanming Hu, Tao Du, Siyuan Zhou, Hao Su, Joshua B Tenenbaum, and Chuang Gan. Plasticinelab: A soft-body manipulation benchmark with differentiable physics. arXiv preprint arXiv:2104.03311, 2021. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with language models. arXiv preprint arXiv:2303.00001, 2023. Sizhe Li, Zhiao Huang, Tao Du, Hao Su, Joshua B Tenenbaum, and Chuang Gan. Contact points discovery for soft-body manipulations with differentiable physics. arXiv preprint arXiv:2205.02835, 2022. Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9493–9500. IEEE, 2023.
S2EN8MCHiz
This paper argues that the maximum ID is a more critical indicator for performance prediction, which is against the observation that the ID of the last latent layer indicates model performance, but from Figure 4, the ID of the last latent layer can also indicate model performance.
Understanding Vision and Language Representations under the Lens of Intrinsic Dimension Anonymous authors Paper under double-blind review Abstract Current multimodal representation learning is mainly achieved by intuitive and heuristic approaches. However, the cooperation and the utility of each modality remain unclear. We empirically investigate the intrinsic dimension (ID) of a large-scale vision-language pre-training model BLIP and explore the relationships among intrinsic dimension, modality, and prunability. It is shown that the ID geometric characteristics of visual and language representations differ significantly in terms of range and shape, resulting in distinct prunability for each modality. Unified multimodal learning can be manifested as the overlay of ID variations of vision and language. By investigating the IDs of attention representations, it is evident that the current cross-modal attention mechanism struggles to embed modalities into the same low-dimensional manifold due to the varying levels of IDs between vision and language. Moreover, we study the contribution of different modalities toward model prunability and explore predicting model performance through the distributions of IDs. An importance metric based on ID is proposed, which yields superior performance for multimodal model pruning. The experimental results show that visual representations are more sensitive and fragile to pruning, while language representations are robust and, therefore, have a higher prunability. 90% BLIP weights in language modality can be pruned with only 3.8 drops on the CIDEr metric. Our observations suggest the potential for more effective pruning of multimodal models using intrinsic dimension (ID) as a guiding metric. 1 Introduction Multimodal pre-training integrates large-scale vision and language data to learn cross-modal representations. Various multimodal pre-training approaches are proposed to capture the semantic alignment and interaction between vision and language, such as ViLBERT (Lu et al., 2019), LXMERT (Tan & Bansal, 2019), and CLIP (Radford et al., 2021). These models adopt different levels of multimodal alignment and learning strategies. However, all these works rely on an implicit hypothesis that different modalities share a common semantic manifold, where concepts with similar semantics are close or even overlapping. Therefore, cross-modal representation learning can be achieved by measuring the similarity between different modalities. On the other hand, with the increasing number of parameters in pre-training models, these types of modality alignment (Wang et al., 2023; Dong et al., 2021; Liu et al., 2021a) have become more and more sophisticated and fine-grained. Although these methods lead to better generalization performance on downstream tasks, however, the fundamental hypothesizes of the common semantic manifold remain unexplored: Are the representations of vision and language evidently connected in the same semantic space? Specifically, how does the cross-modal attention mechanism, as fundamental for multimodal pre-training, capture information beyond mere vector matching? Moreover, how do the varying parameter counts of different modalities contribute to the overall model performance? To explore the above questions, it is necessary to have a unified and essential metric to quantify semantics. 
Although the high-dimensional representations are already a semantic abstraction of training data, their dimensions are manually fixed (e.g., 512, 1024) and thus cannot fully reflect the intrinsic semantic representations. The manifold hypothesis (Bengio et al., 2013) states that high- dimensional data of interest often live in an unknown lower-dimensional manifold embedded in ambient space, which enables the intrinsic dimension (ID) to be a further abstract representation of semantics. Unlike the feature dimension of networks, which is usually set to 512, 1024, and 2048 in common vision tasks. The estimated ID is generally below 150 for popular visual datasets (Pope et al., 2021; Brown et al., 2023) and visual representations (Ansuini et al., 2019). In this study, ID is adopted as a primary metric to observe the geometrical properties and their correlation with pre-training performance on image captioning tasks. Current work on intrinsic dimension mainly focuses on the unimodal, particularly the visual modality. Ansuini et al. (2019) investigates the ID profile of three common CNN-based pre-trained representations and finds the hunchback shape of the ID variation across the layers. Muratore et al. (2022) observes similar first-expansion-then-reduction of object representations along the rat homolog of the ventral stream. Pope et al. (2021) estimates the intrinsic dimensionality of several popular datasets and finds that common natural image datasets have very low intrinsic dimensions relative to the high number of pixels in the images. ID is used to study the semantic complexity of synthetic images by GAN (Pope et al., 2021; Horvat & Pfister, 2022; Barannikov et al., 2021), which allows actively manipulating the intrinsic dimension by controlling the image generation process. Brown et al. (2023) empirically verify the hypothesis of the union of manifolds in common image datasets and find that the data lies on a disconnected set with varying intrinsic dimensions. Amsaleg et al. (2017); Ma et al. (2018) use the local ID to characterize the adversarial robustness of attacked visual regions and find that the LID increases along with the increasing noise in adversarial perturbations. Compared with the large number of ID studies on visual information, including datasets and representations, there are fewer studies on the ID characteristics of language modality. Fine-tuning of the large language model (BERT (Kenton & Toutanova, 2019) and RoBERTa (Liu et al., 2019)) are analyzed from the ID perspective in Aghajanyan et al. (2020). Both theoretical and empirical explanations have been provided, pointing to a low-dimensional reparameterization that is as effective in fine-tuning as the full parameter space. Kvinge et al. (2023) focuses on the prompts for text-to-image generation. It demonstrates that prompt variations affect the intrinsic dimension of model layers in distinct ways. Bottleneck layers, instead of latent layers, correlate with prompt perplexity and intrinsic dimension. Tulchinskii et al. (2023) find that the average intrinsic dimensionality of fluent texts in natural language hovers around the value of 7 to 9 for human-generated texts, while the average intrinsic dimensionality of AI-generated texts for each language is around 1.5 or even lower. The clear statistical separation enables a simple classifier to distinguish human-generated and AI-generated texts. The aforementioned works are predominantly focused on unimodality. 
However, our work delves deeper into the large-scale vision-and-language pre-training model through the lens of intrinsic dimension. It provides a comprehensive understanding of cross-modal representations and their distinctive prunability. Specifically, the main contributions of this work are as follows. • To the best of our knowledge, this work presents the first empirical study into the ID of a large-scale multimodal pre-training model. The geometric properties of IDs for visual and language representations are significantly different. IDs of visual modality are varied with a hunchback shape, ranging from 29 to 180. In contrast, IDs of language modality are uniform with a lower range from 5 to 30. Cross-modal learning can be manifested by the overlay of these two ID variations. • This work provides a detailed explanation of how visual and language modalities align and change IDs in cross-modal attention mechanisms. We argue that the visual and language representations do not lie on the same low-dimensional manifold. Cross-modal attention struggles with embedding low-dimensional language representations into high-dimensional visual manifolds. • We explore the correlation between IDs and layer-wise importance for multimodal pruning. The experiment results demonstrate that utilizing the ID as an indicator for weight pruning yields a superior compression rate and model performance. Notably, pruning either vision or language modality leads to different changes in IDs. Language representations are more robust with higher prunability, while vision representations are more sensitive and have a greater impact on overall performance. 2 Estimating IDs of the BLIP Pre-training 2.1 BLIP Model We use BLIP as a surrogate model to investigate the characteristics of multimodal representations. BLIP (Bootstrapping Language-Image Pre-training) \cite{li2022blip} is a framework for vision-language pre-training (VLP) that extends CLIP (Contrastive Language-Image Pre-training) \cite{radford2021learning}, a contrastive learning method that learns from noisy web image-text pairs. Unlike CLIP, which only uses an encoder-based model, BLIP introduces a multimodal mixture of encoder-decoder (MED) architecture that can flexibly transfer to both understanding-based and generation-based tasks. It filters out noisy captions with a bootstrapping strategy in a CapFilt module and trains with three objectives: image-text contrastive learning, image-text matching, and image-conditioned language modeling. BLIP utilizes large-scale web data and human-annotated data to provide diverse and effective representations. BLIP consists of two unimodal streams: a vision model implemented by ViT \cite{dosovitskiy2022an} and a language model implemented by BERT \cite{devlin2018bert}, respectively. The two streams are connected by multimodal attention layers that allow cross-modal alignment by closing their difference in a shared embedding space. In our implementation, we use BLIP finetuned versions with ViT-Base/16 and CapFilt-Large. 2.2 TwoNN TwoNN algorithm \cite{facco2017estimating} is applied to measure the intrinsic dimension (ID) of the BLIP representations in each layer. ID is a measurement of the effective degrees of freedom or the information content of a set of data that can reveal the complexity and structure of the underlying low-dimensional manifold. The TwoNN algorithm is a simple yet robust and efficient method based on the ratio of distances to the nearest neighbors. 
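As a rough illustration of this ratio-of-distances idea (the precise Pareto-likelihood formulation is given below), the following is a simplified maximum-likelihood variant of the estimator; the outlier-discarding fraction and the use of scikit-learn are our own assumptions, not necessarily the implementation used in this work.

```python
# Simplified TwoNN-style intrinsic-dimension estimate for one layer's activations.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(activations: np.ndarray, discard_top: float = 0.1) -> float:
    """activations: (n_samples, n_features) array of layer outputs."""
    # distances to self, first and second nearest neighbours
    dists, _ = NearestNeighbors(n_neighbors=3).fit(activations).kneighbors(activations)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / np.maximum(r1, 1e-12)                            # ratio of the two distances
    mu = np.sort(mu)[: int(len(mu) * (1.0 - discard_top))]     # drop the largest ratios (outliers)
    # maximum-likelihood fit of the Pareto shape parameter d (approximate after truncation)
    return len(mu) / np.sum(np.log(mu))
```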
Methods that rely on assumptions about the density or smoothness of the data manifold may not yield accurate estimates of the intrinsic dimension (ID) for high-dimensional data. However, the TwoNN algorithm provides a viable alternative by estimating the ID based solely on local information derived from nearest neighbors. This approach is invariant to scaling and rotation, and does not require any parametric assumptions regarding the distribution of the data. Furthermore, it avoids the need for complex optimization procedures, rendering it more flexible and efficient than the Maximum Likelihood Estimation (MLE) \cite{levina2004maximum} method and other approaches that rely on local eigenvalues or geometric properties. TwoNN estimates the intrinsic dimensionality by analyzing, for each point, the distances to its first and second nearest neighbors, denoted by $r_1$ and $r_2$, respectively. The quotient of these two distances, $\mu = r_2 / r_1$, is always at least 1 by definition, and its distribution depends only on the intrinsic dimension: $\mu$ follows a Pareto distribution $Pa(d+1)$, where $d$ is the intrinsic dimension. The likelihood of an i.i.d. sample $\mu = (\mu_1, \mu_2, ..., \mu_N)$ from this distribution is $$P(\mu|d) = d^N \prod_{i=1}^{N} \mu_i^{-d-1}.$$ In practice, the samples $\mu_i$ are obtained by evaluating the network representations on a dataset, and $d$ is recovered by maximizing this likelihood or by fitting a linear regression on the empirical cumulative distribution of $\mu$. This method avoids assumptions on the global distribution, requiring only that the density is approximately constant around each point. This makes it a good fit for the estimation of ID for high-dimensional representations.

2.3 Apply TwoNN on BLIP

Our implementation follows \cite{ansuini2019empirical} to perform an empirical study of the ID statistics of BLIP representations in each layer. The BLIP model consists of 12 blocks for both vision and language modalities, corresponding to the ViT and BERT models, respectively. With image and caption pairs as input, the activations of each layer are treated as data points in their own linear space. The ID of each layer is estimated separately on the same dataset, MSCOCO \cite{lin2014microsoft}. Since the time and space complexity of TwoNN is $O(n_D^2)$, the size of the dataset $n_D$ is critical for effective estimation. Facco et al. (2017) empirically suggest using around 10 times the intrinsic dimension. In our implementation, 2,000 samples are used. The ID estimation of BLIP representations is shown in Figure 1, which shows distinct distributions across vision and language representations. Detailed analyses of ID and multimodal characteristics are presented in Section 3.

**Figure 1:** ID variations of the BLIP pre-training model across layers using the TwoNN estimator with 2,000 samples. Error bars are the standard deviation of the ID. We label the layers in the first block of each modality, and subsequent blocks follow the same repeating design. A and CA denote attention and cross-attention, respectively.

### 3 UNDERSTANDING CROSS-MODALITY LEARNING VIA IDs

#### 3.1 IS THERE A HUNCHBACK IN TRANSFORMER-BASED VISUAL REPRESENTATIONS?

Ansuini et al. (2019); Muratore et al. (2022) have shown that visual representations in CNN-based models (e.g., VGG and ResNet) exhibit a typical hunchback shape in terms of ID variation across layers. However, it is unclear if this pattern holds true for Transformer-based visual models.
We compare the estimated IDs of two Transformer-based models, including BLIP ViT (Li et al., 2022) and VLP (Zhou et al., 2020), to the CNN-based models, including VGG-16 (Simonyan & Zisserman, 2015) and ResNet-152 (He et al., 2016). BLIP and VLP are implemented by dual-stream and single-stream multimodal learning paradigms. They are adopted to eliminate the impact of modality fusion methods. Since the significant structural and layer number differences among networks, they are compared on the relative depths. To make a deeper analysis, the IDs are separated by layer types. The results are shown in Figure 2. ![Figure 2](image2.png) **Figure 2:** 1) ID comparisons of BLIP-ViT, VLP, VGG-16-bn, and ResNet-152 in relative depth. 2) Comparing IDs by layer types for BLIP-ViT model. q, k, and v denote query, key, and value layers in attention. proj and fc denote the projection layer and fully connected layers. Comparing the BLIP IDs with VGG and ResNet, their variations show similar hunchback-shaped profiles. However, the peak location of BLIP lags slightly behind ResNet and VGG, moving from 0.2 to 0.4. VLP’s hunchback is positioned further to the right side, and its values are much lower compared to other models. Overall, Transformer-based models have a wider range of IDs than... CNN-based models. The delayed peak and lower value of Transformer-based models’ IDs can be explained by integrating textual features, particularly in the single-stream VLP model which fuses visual and language at the beginning. As described in Section 3.2, language representations have consistent and lower IDs. We have three main observations according to Figure 2. 1) All of each layer show hunchback profiles but with different distributions. 2) For attention layers (q, k, and v), there is a hunchback but its peak is lagged. 3) For MLP layers (fc1 and fc2), there are two hunchbacks, while the latter one is lower. These observations further verify the aforementioned explanations of multimodal learning. That is, the combination of vision and language representations can be described by the overlay of IDs for each modality. ### 3.2 Statistics of ID for Language Representations A language block consists of two attention modules: self-attention and cross-attention, as well as two feed-forward modules. Each attention module is composed of query, key, value, and fully connected layers. On the other hand, the feed-forward modules only comprise a fully connected layer. The estimation of IDs for language representations follows the same procedure as that of visual representations. As illustrated in Figure 3, Most ID values for language layers are low, typically ranging from 5 to 30. However, the k and v layers of cross-attention exhibit ID values ranging from 70 to 90. Despite being located in the language model, the inputs to k and v layers are visual representations. Compared to visual modality, the ID values of language representations are lower and tend to remain stable across layers. ![Figure 3: IDs of each language layer in BLIP. a, ca, and ff denote attention, cross attention, and feed-forward. q, k, and v denote query, key, and value. im and op denote intermediate and output.](image) ### 3.3 Interpreting Cross-modal Attention via IDs A common cross-attention mechanism for visual captioning tasks involves a three-step process: Firstly, similarity scores are computed between representations of vision and language; Secondly, attention weights are applied to these similarity scores to obtain a weighted vision representation. 
Finally, the vision representation is projected onto a word embedding space. Although this process provides an effective way of visual and language integration, the validity of the inner similarity estimation is unclear. Liu et al. (2021b); Hoover et al. (2019) use visualization techniques to examine cross-modality alignment, but these methods are insufficient for providing an objective and thorough evaluation. We analyze these three steps by investigating ID variations. The first step is projecting the language representation and the visual representation to query (q) and key (k) layers. The language representations (q) lie on a low-dimensional manifold since the language ID is typically around 25. On the other hand, the IDs of the visual representations (k) do not decrease but rather increase from 40 (ID of the last visual layer) to 80. This implies that the k layer expands the visual representation to a higher-dimensional manifold that does not coincide with the language manifold. From such a perspective of ID, cross-modal similarity estimation can be viewed as a process of embedding the low-dimensional language representation into the high-dimensional visual manifold. However, this process may not be effective as the IDs of k layers show that the manifolds of these two modalities are far apart, even though the embedding dimensions of q and k are both 728. The second step of the cross-attention mechanism involves projecting the visual representation to a value (v) layer which is the output of the cross-attention. This output representation is conceptually the closest to a cross-modal representation since it is directly supervised by the caption loss. As illustrated in Figure 2, the IDs of the v layers (the light purple line with triangles) are slightly lower than those of the k layer (the deep purple line with triangles) but still much higher than those of other layers. The third step involves a fully connected layer that projects the cross-modal representation to a language representation. The cross-modal representation is reduced to a low-dimensional manifold, which has a similar ID with other pure language layers. Overall, cross-attention can be viewed as a process that initially rises and then declines in intrinsic dimension. The final ID value is heavily impacted by the output modality. Typically, the visual modality yields a higher ID than language. These discoveries suggest ways to improve cross-modal attention mechanisms. For instance, by increasing the overlap between two manifolds, it is possible to achieve better alignment of vision and language representations. 4 PRUNING MULTIMODAL PRE-TRAINING WITH ID METRIC As mentioned in Section 3, it is widely recognized that the ID is noticeably smaller than the embedding dimensions, often by a factor of ten or even a hundred. Despite feature dimensions ranging from 512 to 2048, their corresponding IDs typically fall below 180 in the BLIP model. In light of this, we explore the possibility of leveraging the ID metric for model pruning. High-value IDs can be the results of two opposite conditions. One is that the training data contains so complex information that requires more dimensions to describe it. Another one is that the network learns meaningless or disordered representations, such as noise or adversarial perturbations, which lead to poor generalization. The pruning strategies for these two scenarios are completely opposite, and it depends on the features the “redundant” weights correspond to. 
If it is the former, then a higher ID indicates greater importance and less pruning, while if it is the latter, the opposite is true. Both views present supporting evidence (Muratore et al., 2022; Ankner et al., 2022). However, there is a lack of systematic studies currently, particularly with respect to targeting multimodal representations. To investigate the relationships among pruning, ID, and model performance, we arrange a series of experiments with different pruning strategies on BLIP pre-training. Implementation details. We conduct our experiments using PyTorch on 4 NVIDIA GeForce GTX 3090 GPUs. The number of samples is 2,000, using TwoNN for ID estimation. The multimodal pre-train model used for pruning is BLIP with ViT-B/16. We perform pruning over the course of 5 epochs. During the first epoch, we train the model without pruning. In epochs 2 to 4, we use the cubic schedule (Sanh et al., 2020) to control the pruning rate at each step. The target pruning rate will be reached by the end of the fourth epoch. In the fifth epoch, the entire model is iteratively pruned at the target pruning ratio. Datasets. We evaluate the performance of pruned models on image captioning task with MSCOCO datasets (Lin et al., 2014). MSCOCO is a large-scale dataset for object detection, segmentation, keypoint detection, and captioning. It covers 80 object categories and 91 stuff categories. We follow the standard splits of MSCOCO 2017, which uses 118K images for training and 5K images for both validation and testing. Each image is paired with 5 human annotated captions. We use five common metrics for evaluation: CIDEr (Vedantam et al., 2015), BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE (Lin, 2004), and SPICE (Anderson et al., 2016). 4.1 DOES PRUNING REDUCE OR INCREASE THE ID? Muratore et al. (2022) argues that pruning luminosity and contrast information in visual representations increases the ID value, while Ankner et al. (2022) argues that the prunability of the neural network decreases as the ID increases. To resolve this discrepancy, we empirically verify it with two important metrics for weight pruning. Magnitude (Zhu & Gupta, 2017) is a gradual pruning method that parameters with small magnitude are pruned. The pruning is iteratively conducted for 5 epochs. Magnitude w/o finetune uses the same metric but with only one-time pruning without fine-tuning. Sensitivity (Zhang et al., 2022) is an iterative pruning method that takes both gradient and uncertainty into consideration for importance evaluation. It is fine-tuned for 5 epochs after pruning. Table 1 and Figure 4 show the model performance across different metrics and the corresponding IDs when the pruning ratio is 80%, respectively. In most layers, the full BLIP model has larger ID values compared to other pruned models. On the other hand, the Mag w/o ft model has the smallest ID values across most layers. Accordingly, the full BLIP model exhibits the best model performance, while the Mag w/o ft model has the worst performance. When comparing the ID values of Mag and Sens models, instability is observed across layers. In the vision layers, Mag’s IDs are considerably larger than Sens’s in the first three blocks. However, after the hunchback peak, the IDs of Sens surpass Mag’s. In the language layers, the IDs of both models are mostly close to each other, but Sens’s peak ID value is larger than Mag’s. 
Overall, we have the following observations according to the comparisons between IDs and the performance of pruned models: 1) Pure pruning significantly decreases IDs of almost all layers, while fine-tuning increases IDs. 2) The vision modality has more significant ID decreases than the language modality. 3) ID values have a positive correlation with model performance but not in direct proportion. We argue that the maximum ID is a more critical indicator for performance prediction, which is against the observation that the ID of the last latent layer indicates model performance (Ansuini et al., 2019). Table 1: Model performance using different weight importance metrics when pruning ratio is 80%. BLIP is the full model before pruning, Mag, Mag w/o ft, and Sens are after pruning models. Mag, Sens, and B denote Magnitude, Sensitivity, and BLEU. | Model | CIDEr | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE | SPICE | |-------------|-------|------|------|------|------|--------|-------|-------| | Full BLIP | 133.3 | 78.9 | 63.7 | 50.5 | 39.7 | 30.9 | 60.0 | 23.8 | | Mag w/o ft | 0.4 | 9.0 | 0.2 | 5−7 | 9−10 | 1.7 | 9.2 | 0.0 | | Mag | 77.6 | 65.1 | 47.7 | 34.9 | 25.8 | 22.7 | 49.3 | 15.5 | | Sens | 124.1 | 76.9 | 61.5 | 48.2 | 37.6 | 29.6 | 58.5 | 22.6 | Figure 4: ID variations of pruned models with different importance metrics: Magnitude (Zhu & Gupta, 2017), Magnitude without finetuning, and Sensitivity (Zhang et al., 2022), with 80% pruning ratio on BLIP pre-training. 4.2 CAN ID PREDICT LAYER IMPORTANCE? In Section 4.1, we study the impact of pruning on ID values. On the contrary, in this section, we investigate how incorporating ID in the importance metric affects the performance of pruned models. The PLATON (Zhang et al., 2022) proposes a sensitivity metric to measure the importance of each weight. We multiply ID, as the layer importance, with the Sens metric to verify whether it improves the pruning performance. Also, we provide results of Sens/ID to analyze the effectiveness of ID for weight pruning. Figure 5 and Table 2 show the ID value and corresponding performance comparisons, respectively. In Table 2, it is evident that using the Sens*ID pruning strategy leads to better performance results for all evaluation metrics in comparison to using the original Sens metric. However, the Sens/ID strategy results in a significant decrease in performance. Figure 5 shows that Sens*ID considerably increases the IDs of all visual representations, including all layers in the visual model and the k and v layers. Table 2: Performance comparison with different weight importance metrics: Sens (Zhang et al., 2022), Sens*ID, and Sens/ID at 80% pruning ratio | Model | CIDEr | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE | SPICE | |-----------|-------|------|------|------|------|--------|-------|-------| | Full BLIP | 133.3 | 78.9 | 63.7 | 50.5 | 39.7 | 30.9 | 60.0 | 23.8 | | Sens | 124.1 | 76.9 | 61.5 | 48.2 | 37.6 | 29.6 | 58.5 | 22.6 | | Sens*ID | 129.2 | 78.7 | 63.6 | 50.1 | 39.1 | 30.2 | 59.4 | 23.3 | | Sens/ID | 75.0 | 65.0 | 47.2 | 34.2 | 25.2 | 22.3 | 48.9 | 15.1 | Figure 5: ID variations of pruned BLIP models with different importance metrics: Sensitivity, Sensitivity*ID, and Sensitivity/ID with 80% pruning ratio. in the language model. On the other hand, the IDs of pure language representations decrease due to the multiplication of IDs. Based on these observations, it can be concluded that incorporating ID improves the importance evaluation of weights, thereby enhancing the overall pruning performance. 
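To illustrate how a per-layer ID could enter a pruning criterion of this kind, below is a minimal sketch. The sensitivity proxy (|weight × gradient|), the global thresholding, and the dictionary of layer IDs are illustrative assumptions; PLATON's uncertainty terms and the cubic pruning schedule are omitted.

```python
# Sketch: fold per-layer intrinsic dimension into a weight-importance score (Sens * ID).
import torch

def id_weighted_masks(model: torch.nn.Module, layer_ids: dict, prune_ratio: float = 0.8):
    """layer_ids maps parameter names to the estimated ID of their layer.
    Assumes a backward pass on a calibration batch has already populated .grad."""
    scores = {}
    for name, param in model.named_parameters():
        if param.grad is None or name not in layer_ids:
            continue
        sensitivity = (param * param.grad).abs()          # first-order sensitivity proxy
        scores[name] = sensitivity * layer_ids[name]      # up-weight layers with higher ID
    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = torch.quantile(flat, prune_ratio)         # keep only the top (1 - prune_ratio)
    return {name: (s > threshold).float() for name, s in scores.items()}
```

Multiplying each parameter tensor by its mask realizes the pruning; dividing by the ID instead of multiplying would correspond to the Sens/ID variant compared above.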
4.3 Do vision and language contribute different prunability? To better understand the contribution of each modality to the overall performance of the pruned model, we assign the same pruning ratio to different modalities. First, we examine the pruning upper bounds of different modalities. Specifically, we conducted pruning experiments on BLIP’s single-modality (V and L) and the entire network (V+L) at various pruning rates (20%, 40%, 70%, 80%, 90%, and 95%). The CIDEr metric is used to evaluate the performance of the pruned model. The model performances are shown in Figure 6. It is observed that a pruning ratio of over 70% resulted in a significant decline in the efficacy of most models, except for the language model pruning alone with the Sens*ID metric (the light blue line with triangles). The models that only prune the visual modality exhibit the fastest decay, whereas the models that only prune the language modalities experience only a 3-6 drop in CIDEr compared to the full BLIP model, even when the pruning ratio is as high as 95%. Overall, the language models have a much higher upper-bound pruning ratio with different importance metrics. Figure 7 shows the change of IDs when pruning only one modality. Pruning any one modality alone will cause the ID of the other modality to change. Intriguingly, only pruning vision layers leads to ID decrease for both vision and language layers, while only pruning language layers leads to ID increase in vision layers (also including k and v layers in cross-attention) but a decrease in pure language layers. Figure 7: When one modality is pruned (red for vision, blue for language) at 80%, 90%, and 95% ratios, its effect on the ID changes of both modalities. Table 3 presents a detailed comparison of the performances between pruning single modality and the entire model. In general, it is observed a decline in model performance after pruning. However, pruning only the language model with an 80% pruning ratio results in an improvement in overall model performance across several evaluation metrics. Comparing Sens and Sens*ID, the improvement is significant for pruning vision, language, and both modalities. Comparing Sens and Sens/ID, the model performance of Sens/ID degrades significantly for only pruning vision models, whereas the performances of pruning language models are similar with both Sens and Sens/ID metrics. Based on the observations, we argue that language representations are more robust for pre-training. Even if some important weights in language layers are pruned, performance can be restored by fine-tuning, which leads to a higher lower bound of performance. On the contrary, visual representation is more sensitive and fragile, and incorrect importance metrics can noticeably degrade model performance. Table 3: When pruning vision model (V), language model (L), and the entire model (V+L) individually at 80% pruning ratio, performance comparison of pruned models over multiple metrics. 
| Model | Prune | CIDEr | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE | SPICE | |-----------|-------|-------|-----|-----|-----|-----|--------|-------|-------| | Full BLIP | None | **133.3** | 78.9 | 63.7 | 50.5 | **39.7** | **30.9** | **60.0** | **23.8** | | Sens | V | 107.9 | 72.8 | 56.5 | 43.4 | 33.4 | 27.5 | 55.3 | 20.4 | | | L | 128.9 | 78.6 | 63.6 | 50.1 | 39.0 | 30.0 | 59.3 | 23.0 | | | V+L | 124.1 | 76.9 | 61.5 | 48.2 | 37.6 | 29.6 | 58.5 | 22.6 | | Sens*ID | V | 110.5 | 73.2 | 57.1 | 44.0 | 33.8 | 27.9 | 55.8 | 20.7 | | | L | 132.8 | **79.4** | **64.3** | **50.9** | **39.7** | 30.6 | 59.9 | **23.8** | | | V+L | 129.2 | 78.7 | 63.6 | 50.1 | 39.1 | 30.2 | 59.4 | 23.3 | | Sens/ID | V | 101.1 | 71.0 | 54.3 | 41.3 | 31.5 | 26.0 | 54.0 | 19.4 | | | L | 128.7 | 78.7 | 63.7 | 50.0 | 38.7 | 28.9 | 59.3 | 22.9 | | | V+L | 75.0 | 65.0 | 47.2 | 34.2 | 25.2 | 22.3 | 48.9 | 15.1 | 5 CONCLUSION This work delves into the ID characteristics of the BLIP large-scale pretraining model for multi-modal learning. Also, the study investigates the potential of utilizing ID variations to determine the significance of each layer. Experimental results indicate that the ID of visual modalities generally exhibits a hunchback profile and has a broad range of ID values (29-180), while language modalities have a consistent distribution of ID values, with a range of 5-30 on each layer. The two modalities’ representations are semantically integrated through multi-modal learning, which can be represented by combining their ID distributions. ID can be used to evaluate the importance level of network layers and enhance pruning performance. Moreover, our work finds that language modality in large-scale BLIP pre-training has more redundant weights but is robust to pruning, while vision modality is sensitive but has a greater impact on overall model performance. REFERENCES Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. *arXiv preprint arXiv:2012.13255*, 2020. Laurent Amsaleg, James Bailey, Sarah Erfani, Teddy Furon, Michael E Houle, Milos Radovanovic, and Nguyen Xuan Vinh. The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality. In *2017 IEEE Workshop on Information Forensics and Security (WIFS)*, pp. 1–6, 2017. Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In *ECCV*, pp. 382–398, 2016. Zachary Ankner, Alex Renda, Gintare Karolina Dziugaite, Jonathan Frankle, and Tian Jin. The effect of data dimensionality on neural network prunability. *arXiv preprint arXiv:2212.00291*, 2022. Alessio Ansuini, Alessandro Laio, Jakob H Macke, and Davide Zoccolan. Intrinsic dimension of data representations in deep neural networks. In *Advances in Neural Information Processing Systems* 32, pp. 6391–6402, 2019. Satanjeev Banerjee and Alon Lavie. Meteor: an automatic metric for mt evaluation with improved correlation with human judgments. In *Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization*, pp. 65–72, 2005. Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, and Evgeny Burnaev. Manifold topology divergence: a framework for comparing data manifolds. *Advances in Neural Information Processing Systems*, 34:7294–7305, 2021. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 
Representation learning: A review and new perspectives. *IEEE transactions on pattern analysis and machine intelligence*, 35(8):1798–1828, 2013. Bradley C.A. Brown, Anthony L. Caterini, Brendan Leigh Ross, Jesse C. Cresswell, and Gabriel Loaiza-Ganem. Verifying the union of manifolds hypothesis for image data. In *ICLR*, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Xiao Dong, Xunlin Zhan, Yangxin Wu, Yunchao Wei, Michael C. Kampffmeyer, Xiaoyong Wei, Minlong Lu, Yaowei Wang, and Xiaodan Liang. M5product: Self-harmonized contrastive learning for e-commercial multi-modal pretraining. *arXiv preprint arXiv:2109.04275*, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2022. Elena Facco, Manlio d’Errico, Alex Rodriguez, and Alessandro Laio. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. *Scientific reports*, 7(1):12140, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2016. Ben Hoover, Hendrik Strobelt, and Sebastian Gehrmann. Visualizing attention in transformer-based language representation models. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pp. 37–42, 2019. Christian Horvat and Jean-Pascal Pfister. Intrinsic dimensionality estimation using normalizing flows. In *NeurIPS* 2022, 2022. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of naacL-HLT*, volume 1, pp. 2, 2019.
b8zji8TBN3
The authors claimed that the disadvantage of uncertainty estimation methods is the high uncertainty for small data regions. “Uncertainty in areas where p(x) is small is often very high, capturing epistemic uncertainty (a.k.a knowledge uncertainty), and this high uncertainty may be unnecessary and potentially worsen the MSE estimation error.” I do not follow the argument here. Does it mean we should not consider epistemic uncertainty?
A One-Step MSE Estimation of Models in Production Anonymous authors Paper under double-blind review Abstract In real-world operation of machine learning systems, monitoring the performance of prediction models is crucial. However, in these scenarios, actual values of target variables are observed with a delay, making real-time evaluation of prediction performance impossible. In this paper, we propose a novel one-step Mean Squared Error (MSE) estimation method that directly and tightly minimizes the upper bound of the MSE estimation error for regression tasks. Due to its direct estimation, our method is more efficient at estimating MSE compared to the conventional two-step approach, which approximates the mean and variance of the target variable. We also provide generalization error bounds for our proposed method based on a theoretical analysis. Our experiments demonstrate the effectiveness of our method, outperforming existing methods on both synthetic and real data sets. 1 Introduction Machine learning (ML) has become widespread in real-world applications. To maximize the benefits of predictions made by machine learning models and minimize the damage caused by prediction errors, it is crucial to continuously monitor the predictive performance of these models [Kreuzberger et al., 2023; Ruf et al., 2021; Testi et al., 2022; Symeonidis et al., 2022]. However, in real-world scenarios, the actual values of objective variables often experience a delay before observation [Plasse & Adams, 2016; Grzenda et al., 2020]. Consequently, estimating the predictive performance of the model is essential for quality control in ML systems. Estimation of prediction performance is commonly performed for classification tasks. Several works based on distribution shifts [Elsahar & Gallé, 2019; Do et al., 2021; Deng & Zheng, 2021; Schelter et al., 2020; Techapanurak & Okatani, 2021; Chen et al., 2021b] and “check” models, which are used to verify the correctness of predictions made by the prediction model, have been proposed [Chuang et al., 2020; Chen et al., 2021a]. However, these methods pose challenges when applied to regression tasks because regression models deal with continuous target values, whereas classification models handle discrete target values. One approach to estimating the Mean Squared Error (MSE), a prevalent evaluation metric for regression tasks, involves approximating the mean and standard deviation (or the confidence interval, the uncertainty) of the target variable ($y$) for each of input variables ($x$). Subsequently, MSE can be estimated by averaging the squared differences between the approximated mean and the predictions made by the prediction model, plus the squared value of the approximated standard deviations (i.e., variances) for the evaluation inputs. Mean and variance estimation models [Nix & Weigend, 1994; Skafte et al., 2019] and uncertainty estimation methods [Lakshminarayanan et al., 2017; Wang et al., 2019; Liu et al., 2019; Rasmussen & Williams, 2005] are suitable for this MSE estimation approach. However, this approach requires learning two separate models for estimating the MSE: one for means and the other for standard deviations. Generally, we can compute the MSE of a prediction model with the mean and standard deviation for each input, but we cannot compute the mean and standard deviation using only the MSE or even the squared errors for each input. This implies that estimating means and standard deviations is more challenging than directly estimating the MSE. 
Following Vapnik’s principle [Vapnik, 1998], which states that when solving a problem with limited information, one should not solve a more general problem than the original problem as an intermediate step, this two-step approach should be avoided due to its added complexity. In this paper, we propose a novel one-step approach that directly estimates MSE based on the average of squared differences between the predictions made by the prediction model and those made by our "check" model. We derive an objective function that tightly bounds the MSE estimation error from above and train our check model by minimizing the objective. Our method effectively estimates MSE more efficiently than conventional two-step approaches due to its one-step direct estimation. Furthermore, we provide a theoretical analysis regarding the upper bound of our method’s generalization error. We also propose a regularization term which aids in the learning of our check model. Experiments conducted using both synthetic and real-world benchmark data sets confirm that our method achieves the lowest MSE estimation error for all data sets. Our key contributions are summarized as follows: - We formulate the MSE estimation problem and propose a one-step estimation method that directly and tightly minimizes the MSE estimation error. - We theoretically analyze our method and provide an upper bound of the generalization error for our MSE estimation approach. - We conduct MSE estimation experiments using both synthetic and real-world data sets, empirically demonstrating the superior performance of our method compared to the conventional two-step approaches. 2 PRELIMINARY In this section, we introduce the problem definition and relevant previous methods briefly. 2.1 PROBLEM FORMULATION We consider a supervised regression problem; the input space is \( X \subseteq \mathbb{R}^d \) with a positive integer \( d \) and the output space is \( Y \subseteq \mathbb{R} \). Let \( D_{tr} = \{(x_{tr,i}, y_{tr,i})\}_{i=1}^{n} \) be training samples drawn from a training distribution whose density is \( p_{tr}(x, y) \) in an i.i.d. fashion. A prediction model \( f : X \rightarrow Y \) is trained using \( D_{tr} \) in a training phase. In an operational phase after the training phase, the prediction model \( f \) is used to predict output values of corresponding inputs \( U_{op} = \{x_{op,i}\}_{i=1}^{m} \), drawn from an operational distribution \( p_{op}(x) \). The mean squared error (MSE) of \( f \) over the joint operational distribution \( p_{op}(x, y) \) is defined as the expectation as follows: \[ \text{MSE}(f) := \mathbb{E}_{p_{op}(x,y)}[(y - f(x))^2], \] where \( \mathbb{E}_{p(x,y)}[g(x,y)] \) computes the expectation of \( g(x,y) \) over the density \( p(x,y) \) as \( \int_{X \times Y} g(x,y) dp(x,y) \). We intend to estimate MSE\( (f) \) after the prediction and before observing the actual values of target variables, i.e., estimate MSE\( (f) \) using \( f, D_{tr}, \) and \( U_{op} \). However, MSE estimation without any assumption is infeasible. Hence, we employ the covariate shift assumption, which is a prevalent setting for machine learning in the wild. The problem definition is then formalized as follows. **Definition 1 (MSE estimation problem).** Given a regression model \( f : X \rightarrow Y \) trained with training data \( D_{tr} = \{(x_{tr,i}, y_{tr,i})\}_{i=1}^{n} \), i.i.d. samples from a training distribution \( p_{tr}(x, y) = p_{tr}(x)p(y|x) \), and operational data \( U_{op} = \{x_{op,i}\}_{i=1}^{m} \), i.i.d. 
samples from an operational distribution \( p_{op}(x) \), the task is to estimate MSE\( (f) \) with \( p_{op}(x, y) = p_{op}(x)p(y|x) \).

2.2 RELATED WORKS

We categorize the related works into three groups: mean and variance estimation methods, uncertainty estimation methods, and accuracy estimation methods for classification.

**Mean and variance estimation methods.** Eq. (1) can be expanded as \[ \text{MSE}(f) = \mathbb{E}_{p_{op}(x)}[\mathbb{E}_{p(y|x)}[(y - f(x))^2]] = \mathbb{E}_{p_{op}(x)}[(u(x) - f(x))^2 + \sigma(x)^2], \] where we define \( u(x) := \mathbb{E}_{p(y|x)}[y] \) and \( \sigma(x)^2 := \mathbb{E}_{p(y|x)}[(y - u(x))^2] \). Based on this expansion, one can estimate the MSE by approximating both \( u(x) \) and \( \sigma(x) \). In other words, the MSE can be estimated by a two-step approach: one step for computing \( u(x) \) and the other for computing \( \sigma(x) \). Nix & Weigend (1994) propose to train mean and variance networks, where two networks learn the mean and the variance, respectively, and are jointly trained by maximizing the log likelihood of the training data. Skafte et al. (2019) further adopt (a) a locally-aware mini-batching scheme with adjusted sample weights, (b) mean and variance split training, (c) estimation of an Inverse-Gamma distribution for $\sigma(x)$ instead of a point estimate of $\sigma(x)$, and (d) an extrapolation architecture. These methods successfully capture means and variances when abundant samples are available. However, as described in the introduction, estimating means and variances is a more difficult problem than solely solving the MSE estimation problem. According to Vapnik’s principle, this approach should be avoided.

**Uncertainty estimation methods.** Recently, uncertainty estimation methods have been actively studied in the field of machine learning (Abdar et al., 2021). Monte Carlo dropout (Gal & Ghahramani, 2016) employs dropout (Srivastava et al., 2014) as an approximation of Bayesian neural networks and is utilized for uncertainty estimation (Wang et al., 2019; Liu et al., 2019). Lakshminarayanan et al. (2017) proposed a technique called DeepEnsemble, in which neural networks are trained with varying random initializations and then ensembled to achieve a high capability for estimating uncertainty. Malinin et al. (2021) also investigated uncertainty estimation for gradient boosting models, proposing an ensemble method for gradient boosting decision trees. Gaussian processes (Rasmussen & Williams, 2005) can also be regarded as uncertainty estimation methods with built-in uncertainty quantification. These methods output the mean and an uncertainty, which may be related to the variance. Consequently, one can use Eq. (2) to estimate the MSE. However, the uncertainty in areas where $p(x)$ is small is often very high, capturing epistemic uncertainty (a.k.a. knowledge uncertainty), and this high uncertainty may be unnecessary and can potentially worsen the MSE estimation error.

**Accuracy estimation methods for classification.** Similarly to the MSE estimation problem, the accuracy estimation problem has been studied for classification tasks. Several works have proposed estimating accuracy based on the distribution shift of the input variables between the training and operational data (Elsahar & Gallé, 2019; Do et al., 2021; Deng & Zheng, 2021; Schelter et al., 2020; Techapanurak & Okatani, 2021). Chen et al. (2021b) employed domain adaptation methods for better estimation.
Recently, the use of “check” models, which verify the predictions of the model, has been proposed and has achieved superior performance in accuracy estimation for image and text classification tasks compared to the methods based on distribution shifts (Chuang et al., 2020; Chen et al., 2021a). These accuracy estimation methods cannot be directly applied to the MSE estimation problem due to the differences in the nature of classification and regression tasks, i.e., discrete versus continuous target variables.

### 3 Proposed Method

In this section, we introduce a new method to estimate MSE by directly minimizing the MSE estimation error. Furthermore, through our theoretical analysis, we present a generalization error bound for this approach.

#### 3.1 Upper Bounding the MSE Estimation Error

We estimate the MSE by twice the expected squared difference between $h(x)$ and $f(x)$ over $p_{op}(x)$, $$\widehat{\text{MSE}}(f; h) = \mathbb{E}_{p_{op}(x)} \left[ 2(h(x) - f(x))^2 \right],$$ which replaces the variable $y$ in Eq. (1) with the output of another model $h$. We call $h$ the “check model” in this paper. The MSE estimation error $E(h)$ is then defined as the absolute error between the true MSE $\text{MSE}(f)$ and the estimated MSE $\widehat{\text{MSE}}(f; h)$: $$E(h) := \left| \text{MSE}(f) - \widehat{\text{MSE}}(f; h) \right|.$$ We aim at minimizing $E(h)$ with respect to $h$. In the following, we derive a training objective for $h$ which directly minimizes $E(h)$. Our analysis is based on an inequality relating the squared expectation and the expectation of the squared values, as stated in Lemma 1.

Lemma 1. The following inequality holds: \[ (\mathbb{E}[x])^2 \leq \mathbb{E}\left[s(x)x^2\right] \leq \mathbb{E}\left[x^2\right] \leq 2\mathbb{E}\left[s(x)x^2\right], \] where \( s(x) := 1_{(x \geq 0 \,\land\, \mathbb{E}[(x)^{+2}] \geq \mathbb{E}[(-x)^{+2}]) \,\lor\, (x < 0 \,\land\, \mathbb{E}[(x)^{+2}] < \mathbb{E}[(-x)^{+2}])} \) and \((x)^{+2} := \max(0, x)^2\). Note that \(1_c\) denotes the indicator function: \(1_c = 1\) if the condition \(c\) is true, and 0 otherwise.

The proof is based on a direct calculation using Jensen’s inequality and its details are presented in the Appendix. The MSE estimation error \(E(h)\) can be rewritten as \[ E(h)^2 = \left| \mathbb{E}_{p_{op}(x,y)}[(y - f(x))^2] - \mathbb{E}_{p_{op}(x)}[2(h(x) - f(x))^2] \right|^2 = (\mathbb{E}_{p_{op}(x,y)}[e_f(h,x,y)])^2, \] where \(e_f(h,x,y) := (y - f(x))^2 - 2(h(x) - f(x))^2\) is the difference between the squared error of \(f\) for sample \((x,y)\) and its estimate \(2(h(x) - f(x))^2\) computed with \(h\). We apply Lemma 1 to Eq. (6) and obtain the following theorem.

Theorem 2. Let us define \(K(h), K^+(h), K^-(h)\) and \(K^*(h)\) as \(K(h) = \mathbb{E}_{p_{op}(x,y)}[e_f(h,x,y)^2]\), \(K^+(h) = \mathbb{E}_{p_{op}(x,y)}[(e_f(h,x,y))^{+2}]\), \(K^-(h) = \mathbb{E}_{p_{op}(x,y)}[(-e_f(h,x,y))^{+2}]\), and \(K^*(h) = \mathbb{E}_{p_{op}(x,y)}[s(e_f(h,x,y))e_f(h,x,y)^2]\), where \(s(t) = 1_{(t \geq 0 \,\land\, K^+(h) \geq K^-(h)) \,\lor\, (t < 0 \,\land\, K^+(h) < K^-(h))}\). Then, the MSE estimation error \(E(h)\) is bounded as \[ E(h)^2 \leq K^*(h) \leq K(h) \leq 2K^*(h). \] We omit the proof since the theorem follows directly from Lemma 1.

Theorem 2 indicates that the MSE estimation error is upper bounded by \(K^*(h)\), and this bound is tighter than the bound \(K(h)\), which would be obtained by a naive application of Jensen’s inequality. \(K^*(h)\) is therefore a promising objective function to train \(h\) for the MSE estimation problem. In practice, we minimize the empirical version of \(K^*(h)\).
However, this minimization requires samples \((x_{op}, y_{op})\) from the operational density \(p_{op}(x,y)\), and the target variables \(y_{op}\) are unavailable by definition of the problem. Hence, we exploit \(D_{tr}\) to train \(h\).¹ Note that using \(D_{tr}\) instead of \(D_{op}\) is practically valid under the absolute continuity assumption \(p_{tr}(x) = 0 \Rightarrow p_{op}(x) = 0\) (Fang et al., 2020) and the assumption that a model can globally fit the data (Quionero-Candela et al., 2009). The training objective is thus defined as
\[
\hat{K}^*(h; D_{tr}) := \frac{1}{|D_{tr}|} \sum_{(x,y) \in D_{tr}} \hat{s}(e_f(h,x,y))e_f(h,x,y)^2,
\]
where \(\hat{s}(t) := 1_{(t \geq 0 \land \hat{K}^+(h; D_{tr}) \geq \hat{K}^-(h; D_{tr})) \lor (t < 0 \land \hat{K}^+(h; D_{tr}) < \hat{K}^-(h; D_{tr}))}\) with
\[
\hat{K}^+(h; D_{tr}) := \frac{1}{|D_{tr}|} \sum_{(x,y) \in D_{tr}} (e_f(h,x,y))^+,
\]
\[
\hat{K}^-(h; D_{tr}) := \frac{1}{|D_{tr}|} \sum_{(x,y) \in D_{tr}} (-e_f(h,x,y))^+.
\]
We train \(h\) by minimizing \(\hat{K}^*(h; D_{tr})\) and predict the MSE using \(U_{op}\) as
\[
\widehat{\text{MSE}}(f; h; U_{op}) := \frac{1}{|U_{op}|} \sum_{x_{op} \in U_{op}} 2(h(x_{op}) - f(x_{op}))^2.
\]

Remark 3. Our method of training \(h\) by minimizing \(\hat{K}^*(h; D_{tr})\) remains consistent even when \(f\) overfits. Suppose that \(f\) fully overfits \(D_{tr}\), i.e., \(f(x) = y\) holds for any \((x,y) \in D_{tr}\), and let \(u(x)\) be \(\mathbb{E}_{p(y|x)}[y]\) and \(\sigma(x)\) be \(\sqrt{\mathbb{E}_{p(y|x)}[(y - u(x))^2]}\). On the one hand, the MSE of \(f\) is approximated as \(\text{MSE}(f) = \mathbb{E}_{p_{op}(x,y)}[(y - f(x))^2] \approx \mathbb{E}_{p_{op}(x)p(y|x)p(y'|x)}[(y - y')^2] = 2\mathbb{E}_{p_{op}(x)}[\sigma(x)^2]\), since \(f(x)\) can be regarded as outputting a value \(y'\) of some \((x', y')\) in \(D_{tr}\), which approximately follows \(p(y'|x)\). On the other hand, \( e_f(h, x, y) = -2(h(x) - y)^2 \) on \(D_{tr}\) and \( \hat{K}^*(h) = \frac{4}{|\mathcal{D}_{tr}|} \sum_{(x,y) \in \mathcal{D}_{tr}} (h(x) - y)^4 \). The minimization of \( \hat{K}^*(h) \) leads \( h(x) \) to be close to \( u(x) \) for each \( x \). Then the estimated MSE becomes
\[
\widehat{\text{MSE}}(f; h) = 2 \mathbb{E}_{p_{op}(x)}[(h(x) - f(x))^2] \approx 2 \mathbb{E}_{p_{op}(x)}[(u(x) - f(x))^2] \approx 2 \mathbb{E}_{p_{op}(x)p(y'|x)}[(y' - u(x))^2] = 2 \mathbb{E}_{p_{op}(x)}[\sigma(x)^2],
\]
which corresponds to the approximated MSE above. Hence, even when \( f \) is overfitting, our method precisely estimates the MSE.

It should be noted that one of the most naive methods for one-step MSE estimation is to fit a check model \( h' \) to the squared errors of the model by minimizing \( \frac{1}{n} \sum_{(x,y) \in \mathcal{D}_{tr}} ((y - f(x))^2 - h'(x))^2 \) and estimate the MSE by \( \mathbb{E}_{p_{op}(x)}[h'(x)] \). However, such an \( h' \) is clearly biased toward the training errors of \( f \). In an extreme case where \( f \) is perfectly fitted to \( \mathcal{D}_{tr} \), \( h' \) is trained to output 0 for any input, as \( (y - f(x))^2 = 0 \) for any \( (x,y) \in \mathcal{D}_{tr} \), and the estimated MSE is consistently 0. This estimate is incorrect unless \( \mathcal{D}_{op} \) contains no new samples, i.e., \( (x,y) \in \mathcal{D}_{op} \Rightarrow (x,y) \in \mathcal{D}_{tr} \).

¹Covariate shift adaptation methods (Yamada et al., 2011; Kanamori et al., 2009; Zhang et al., 2020) can be used for better training of \(h\) using \(U_{op}\). However, this requires re-training \(h\) every time \(U_{op}\) changes, which is costly for continuous monitoring in a typical MLOps situation.
Thus, it is a clear advantage that our method does not assume anything regarding \( f \) as Remark 3 states. ### 3.2 THEORETICAL ANALYSIS In this subsection, we establish an upper bound of the generalization error of the proposed method using the Rademacher complexity [Koltchinskii, 2001]. We assume \( p_{tr}(x) = p_{op}(x) = p(x) \) for simplicity. Firstly, we justify our method of minimizing \( \hat{K}^*(h; \mathcal{D}_{tr}) \) for the MSE estimation problem by Lemma 4. **Lemma 4.** Let \( f : X \rightarrow Y \) be a given regression model and \( \mathcal{H} \) be a family of functions mapping from \( X \) to \( Y \). Assume that (a) there exists a constant \( M > 0 \) such that \( |e_f(h, x, y)| \leq M \) holds for every \( h \in \mathcal{H} \) and \( (x,y) \in X \times Y \), (b) there exists some constant \( H > 0 \) such that \( |h(x) - f(x)| \leq H \) holds for every \( h \in \mathcal{H} \) and \( x \in X \). Then, for any \( \delta \in (0, 1) \), with probability \( 1 - \delta \) over the draw of an i.i.d. sample \( S \) of size \( n \) from \( p(x,y) = p(x)p(y|x) \), the following inequality holds for all \( h \in \mathcal{H} \): \[ E(h)^2 \leq \hat{K}^*(h; S) + 16HM \mathfrak{R}_n(\mathcal{H}) + M^2 \sqrt{\frac{\log \frac{1}{\delta}}{2n}}, \] where \( \mathfrak{R}_n(\mathcal{H}) \) is the Rademacher complexity of \( \mathcal{H} \) for the sampling of size \( n \) from \( p(x,y) \). The proof is based on Theorem 3.3 in [Mohri et al., 2018], the definition of the Rademacher complexity, and Ledoux-Talagrand contraction lemma [Ledoux & Talagrand, 2013], and presented in Appendix. Lemma 4 shows that minimizing \( \hat{K}^*(h; \mathcal{D}_{tr}) \) makes the upper bound of the MSE estimation error lower. Hence, our method solves the MSE estimation problem. Next, we provide an upper bound of the generalization error of a check model \( \hat{h} \) which is obtained by our method. Before stating the theorem, we prepare two lemmas regarding \( J(h) \). **Lemma 5.** The following inequality holds for every \( h \): \[ E(h)^2 \leq J(h) := \mathbb{E}_{p(x)} \left[ (\mathbb{E}_{p(y|x)}[(y - f(x))^2] - 2(h(x) - f(x))^2)^2 \right] \] **Lemma 6.** The following equality holds for every \( h \): \[ K(h) = J(h) + C_p, \] where \( C_p = \mathbb{E}_{p(x,y)}[(y - f(x))^4] - \mathbb{E}_{p(x)} \left[ (\mathbb{E}_{p(y|x)}[(y - f(x))^2])^2 \right] \geq 0 \) does not depend on \( h \). **Theorem 7.** Suppose that the assumptions made in Lemma 4 holds. Let \( h^* \) be a minimizer of \( K(h) \) and \( \hat{h} \) be a minimizer of \( \hat{K}^*(h; S) \). Assume that (a) the minimizer of \( \hat{K}^* \) makes \( \hat{K} \) lower than the minimizer of \( K \), i.e., \( \hat{K}(\hat{h}; S) \leq \hat{K}(h^*; S) \), and (b) the minimizer of \( K \) makes \( K^* \) lower than the minimizer of \( \hat{K}^* \), i.e., \( K^*(h^*) \leq K^*(\hat{h}) \). Then for any \( \delta \in (0, 1) \), with probability \( 1 - \delta \) over the draw of an i.i.d. sample \( S \) of size \( n \) from \( p(x,y) \), the following inequality holds: \[ E(\hat{h})^2 \leq J(h^*) + 32HM \mathfrak{R}_n(\mathcal{H}) + 4M^2 \sqrt{\frac{\log \frac{4}{\delta}}{2n}}. \] Suppose that we employ a model family $\mathcal{H}$ such that $\mathfrak{R}_n(\mathcal{H}) = O(1/\sqrt{n})$. Then we have $$E(\hat{h}) \leq \sqrt{J(h^*)} + O_p(1/\sqrt[4]{n}),$$ where $O_p$ denotes the order in probability. This shows that the MSE estimation error of the proposed method decreases at a rate of $n^{-1/4}$. 
If the lowest value of $J$ among $\mathcal{H}$ is small, i.e., $J(h^*)$ is small (recall that the minimizers of $J$ and $K$ are identical by Lemma 6), this guarantees a good performance of the proposed method in theory.

### 3.3 Practical Modification: A Regularization for Consistent Fitting

In practical cases, training $h$ with $\hat{K}^*(h)$ may suffer from unstable optimization due to the multimodality of $e_f(h, x, y)$. When $y \neq f(x)$, there exist two values of $h(x)$ that achieve $e_f(h, x, y) = 0$, i.e., $e_f(h, x, y) = 0 \Leftrightarrow h(x) = f(x) \pm \frac{1}{\sqrt{2}}|y - f(x)|$. Hence, the naive optimization of $\hat{K}^*(h)$ can make $h$ non-smooth: $h$ may fit the higher of the two values, $h(x) = f(x) + \frac{1}{\sqrt{2}}|y - f(x)|$, at $x$, while at $x'$ it may fit the lower one, $h(x') = f(x') - \frac{1}{\sqrt{2}}|y' - f(x')|$. This inconsistency may worsen the MSE estimation error, especially for inputs between $x$ and $x'$. For better optimization of $h$, we encourage $h$ to always fit the lower value for every $x$ by adding the following regularization term:
$$\hat{R}(h; D^{tr}) := \frac{1}{|D^{tr}|} \sum_{(x,y) \in D^{tr}} (f(x) - y)^2 \times (h(x) - (f(x) - \varepsilon))^+,$$
where $\varepsilon \in \mathbb{R}$ is a small constant, e.g., $1.0 \times 10^{-3}$. $\hat{R}(h; D^{tr})$ gives a penalty when $h(x)$ exceeds $(f(x) - \varepsilon)$. Note that the factor $(f(x) - y)^2$ disables the penalty when $y = f(x)$. The effect of this regularization is visualized in Figure 1: with the regularization, $h$ stays consistently lower than $f$, whereas $h$ trained solely with $\hat{K}^*(h; D^{tr})$ fits values both higher and lower than $f(x)$ across the inputs.² Our final objective function is
$$\hat{L}(h; D^{tr}) := \hat{K}^*(h; D^{tr}) + \lambda \hat{R}(h; D^{tr}),$$
where $\lambda \in \mathbb{R}_{\geq 0}$ is a hyperparameter whose default value can be 100. We use $\hat{L}(h; D^{tr})$ to train $h$ and estimate the MSE using Eq. (11). The quantitative differences between the use of $\hat{K}^*(h; D^{tr})$ and $\hat{L}(h; D^{tr})$ are reported in the next section for both synthetic and real-world benchmark data sets, and this regularization is confirmed to improve the MSE estimation particularly for real-world data sets.

### 4 Experiments

We conduct experiments on synthetic and benchmark data sets to verify the effectiveness of the proposed method. The implementation is based on PyTorch (Paszke et al., 2019) and scikit-learn (Pedregosa et al., 2011). All experiments are carried out on a computational server equipped with four Intel Xeon Platinum 8260 CPUs with 192 logical cores in total and 1TB RAM.

---
²The basic experimental setting follows Section 4.1, except that we use 500 training samples and a 5-layer NN for $f$ and $h$ in order to clarify the effect of $\hat{R}$.

### 4.1 Experiments on Toy Data Sets

We first conduct experiments on synthetic toy data sets.

**Data.** We generate three synthetic data sets, A, B, and C, visualized in Figure 2. For all data sets, we generate the input variables \( x \in \mathbb{R} \) from a normal distribution \( \mathcal{N}(0, 1^2) \), where \( \mathcal{N}(\mu, \sigma^2) \) denotes the Gaussian density with mean \( \mu \) and variance \( \sigma^2 \). Then, we use the following equation to generate the corresponding target variable \( y \) given \( x \):
\[
y = (3x + 5)\sin(3x + 5) + 0.3(1 + \max(0, 3x + 5))\varepsilon,
\]
where \( \varepsilon \) is a noise term.
For the synthetic A data, we generate the noise as a standard Gaussian random variable, i.e., \( \varepsilon \sim \mathcal{N}(0, 1^2) \). For the synthetic B data, we employ the half-normal distribution, i.e., \( \varepsilon = |\varepsilon'| \) where \( \varepsilon' \sim \mathcal{N}(0, 1^2) \). Finally, for the synthetic C data, we use the Inv-Gamma distribution, i.e., \( \varepsilon \sim \text{Inv-Gamma}(\alpha, \beta) \), where \( \alpha \) is the shape parameter set to 2 and \( \beta \) is the scale parameter set to 0.5. The synthetic B data is designed to assess the robustness of the MSE estimation methods against non-Gaussian noise, while the synthetic C data is used to evaluate the effect of outliers. Each data set consists of 100 training and 10,000 operational samples.

Figure 2: Visualization of the synthetic data sets A, B, and C. The line of \( u(x) \) denotes \( y = (3x + 5)\sin(3x + 5) \), while \( \sigma(x) = 0.3(1 + \max(0, 3x + 5)) \) is the noise scale.

**Setting.** We use a three-layer feedforward neural network (3-layer NN) for the prediction models. The number of units in the hidden layer is set to 64, and the activation function placed after the input and hidden layers is ReLU (Nair & Hinton, 2010). We train the network for 200 epochs with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01 and weight decay of \( 1 \times 10^{-3} \) (Hanson & Pratt, 1988). The batch size is set to 100, i.e., we employ full-batch training. For estimating MSE, we train a 3-layer NN as \( h \) to minimize \( \hat{L}(h; D_{tr}) \) with \( \lambda = 100.0 \). We also train \( h \) by minimizing \( \hat{K}(h; D_{tr}) \) and \( \hat{K}^*(h; D_{tr}) \) to clarify the benefits of the use of \( \hat{K}^* \) and \( \hat{R}(h; D_{tr}) \). As baselines, we use the following methods:

- **[M2V]** A naive baseline where a 3-layer NN is first trained for the mean by minimizing the training MSE, and then another 3-layer NN is trained to estimate the variance by maximizing the log-likelihood while the mean-estimation NN is kept fixed.
- **[MVN]** Mean and variance network (Nix & Weigend, 1994). We use two 3-layer NNs to estimate means and variances, which are jointly trained by maximizing the log-likelihood.
- **[ENS]** DeepEnsemble (Lakshminarayanan et al., 2017). We train 10 MVN networks and compute the ensembled means and variances.
- **[RVN]** Reliable estimation of variance networks (Skafte et al., 2019). We train the mean estimation network for 100 epochs, and then train the mean and three variance-related networks for 10 epochs using their mini-batch strategy, which corresponds to 100 epochs of standard training. The hyperparameters follow the values proposed in the paper. The implementation is based on code provided by the authors (Skafte, 2019).
- **[RVNnE]** A variant of RVN without its extrapolation architecture, since epistemic uncertainty may worsen the results.
- **[M2RVN]** A variant of RVN where its mean network is fixed after the first 100 epochs.

The architectures and hyperparameters of the NNs used in our method and the baselines are the same as those of the prediction models described above.

Table 1: Average absolute MSE estimation errors (↓) for synthetic data sets over 100 trials. Numbers in brackets are the standard deviations. Boldface highlights the lowest error and comparable results based on the Wilcoxon signed-rank test (Wilcoxon, 1945) with a significance level of 5%.
Asterisk and underline further spotlight the lowest and the second lowest errors, respectively. | Data set | Baselines | Ours | |------------|--------------------|------------| | | M2V | MVN | ENS | RVN | RVNnE | M2RVN | \( \hat{K} \) | \( \hat{K}^* \) | \( \hat{L} \) | | Synthetic A| 0.294 | 1.411 | 12.704 | 1.010 | 1.009 | 0.553 | 0.292 | **0.235** | 0.259 | | | (0.259)| (7.853) | (116.765)| (1.120) | (1.121) | (0.450) | (0.231) | *(0.208)* | (0.250) | | Synthetic B| 0.307 | 70.432 | 325.730 | 0.940 | 0.931 | 0.560 | 0.264 | **0.252** | 0.259 | | | (0.283)| (676.069)| (3143.811)| (0.808) | (0.798) | (0.450) | (0.201) | *(0.201)* | (0.226) | | Synthetic C| 0.751 | 25.897 | 118.872 | 0.799 | 0.819 | 0.541 | **0.407**| **0.412** | **0.425**| | | (2.351)| (190.993)| (493.593)| (0.724) | (0.750) | (0.411) | (0.365) | *(0.415)* | (0.426) | As a data preprocess, the target variables \( y \) in training and operational data are scaled based on the training data before being fed to the models. Then, MSE is computed and estimated for this scaled \( y \). The evaluation is based on the absolute error of the empirical MSE and estimated MSE, where the empirical MSE is computed using the operational data \( D^{op} \) as \( \text{MSE}(f, D^{op}) := \frac{1}{|D^{op}|} \sum_{(x,y) \in D^{op}} (y - f(x))^2 \). We conduct experiments 100 times for each data set generated with different random seeds, and report the average errors. We also perform statistical tests to clarify the significance of the results. **Result.** The results are presented in Table 1 displaying the average absolute MSE estimation errors over 100 trials. These findings demonstrate that none of the baselines achieve the best or even comparable results, highlighting the superiority of our one-step approach for MSE estimation. Our method minimizing \( \hat{K}^* \) achieves the lowest errors for synthetic A and B data, while the one minimizing \( \hat{K} \) is the most successful for synthetic C data. The results obtained with \( \hat{K}^* \) are either the second best or competitive. Although adding the regularization term \( \hat{R} \) does not improve MSE estimation compared to \( \hat{K}^* \), using \( \hat{L} \) still outperforms the naive upper bound \( \hat{K} \). Regarding the baselines’ results, MVN and ENS exhibit very high estimation errors for synthetic B and C data. Apart from our methods, M2V or M2RVN provide the lowest errors, indicating the effectiveness of separately training the mean and variance networks in a two-step process for these synthetic data sets. In terms of robustness against different types of noise, MVN and ENS are weak against non-Gaussian noise (Synthetic B and C), M2V and our methods appear vulnerable to outliers (Synthetic C), while RVN and its variants are stable for all types of noise. Further analyses, including qualitative evaluation and robustness of our method for overfitting and underfitting models are available in Appendix. ### 4.2 Experiments on Benchmark Data Sets We next perform experiments on benchmark data sets to demonstrate the usefulness of our method in real world. **Data.** We use mg, space-ga, cpusmall, cadata, and abalone data sets obtained from LIBSVM data sets (Chang & Lin, 2023). The data set statistics are summarized in Table 2. **Setting.** We select 1,000 continuous samples from a data set, starting with an index chosen uniformly at random. The first 500 samples are used as training data, while the remaining 500 samples serve as operational data. 
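Throughout both sets of experiments, evaluation compares the empirical MSE computed on the labeled operational data with the estimate of Eq. (11), which only needs the unlabeled operational inputs (after the target scaling described above for the synthetic data). The sketch below, with illustrative names and not the authors' code, shows this metric once \(f\) and \(h\) have been trained as described next.

```python
# Minimal sketch (illustrative names, not the authors' code) of the evaluation metric:
# the absolute difference between the empirical MSE on labeled operational data and the
# label-free estimate of Eq. (11).
import numpy as np

def mse_estimation_error(f, h, x_op, y_op):
    empirical_mse = np.mean((y_op - f(x_op)) ** 2)             # MSE(f, D_op), uses labels
    estimated_mse = np.mean(2.0 * (h(x_op) - f(x_op)) ** 2)    # Eq. (11), label-free estimate
    return abs(empirical_mse - estimated_mse)
```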
We train the prediction model, which is a 3-layer neural network (NN), using the Adam optimizer for 200 epochs. The number of hidden units, learning rate, weight decay, and batch size are tuned for each data set via a hyperparameter search by Optuna (Akiba et al., 2019). The hyperparameters can be found in Appendix. The MSE estimation methods employed are the same as those used in the previous experiments, and the hyperparameters of the 3-layer NNs remain consistent with the ones tuned for each data set for the prediction models. We evaluate the absolute MSE estimation error between the empirical and estimated MSEs. The experiments are repeated 100 times with different starting indices and random seeds, and we report the average errors.

| Name | #Samples | #Features |
|-----------|----------|-----------|
| mg | 1385 | 6 |
| space-ga | 3107 | 6 |
| abalone | 4177 | 8 |
| cpusmall | 8192 | 12 |
| cadata | 20640 | 8 |

Table 2: Data set statistics.

Table 3: Average absolute MSE estimation errors (↓) for benchmark data sets over 100 trials. Numbers in brackets are the standard deviations. Boldface highlights the lowest error and comparable results based on the Wilcoxon signed-rank test with a significance level of 5%. Asterisk and underline further spotlight the lowest and the second lowest errors, respectively. NA indicates that error exceeds 1000.

| Data set | Baselines | Ours |
|------------|-----------|------|
| | M2V | MVN | ENS | RVN | RVNnE | M2RVN | $\hat{K}$ | $\hat{K}^*$ | $\hat{L}$ |
| mg | 0.064 | 0.050 | 0.052 | 0.130 | 0.134 | **0.044** | 0.122 | 0.072 | **0.041** |
| | (0.030) | (0.028) | (0.027) | (0.062) | (0.053) | *(0.034)* | (0.022) | (0.025) | *(0.026)* |
| space-ga | 0.389 | 0.355 | 0.345 | 0.982 | 0.968 | 0.532 | 0.314 | **0.281** | **0.278** |
| | (0.393) | (0.380) | (0.381) | (0.333) | (0.272) | (0.213) | (0.216) | (0.202) | *(0.393)* |
| abalone | 0.995 | 0.964 | 0.962 | **0.935** | **0.971** | **0.921** | 0.992 | 0.929 | **0.919** |
| | (1.372) | (1.379) | (1.376) | *(1.166)* | *(1.201)* | *(1.269)* | (1.361) | (1.314) | *(1.312)* |
| cpusmall | **21.329** | NA | NA | 1.284 | 1.291 | 0.461 | 0.045 | **0.044** | **0.028** |
| | *(212.062)* | | | *(0.258)* | *(0.297)* | *(0.094)* | *(0.038)* | *(0.038)* | *(0.028)* |
| cadata | 56.048 | NA | NA | **33.894** | **24.318** | 34.637 | **9.086** | **11.560** | **14.082** |
| | (405.424) | | | *(184.504)* | *(105.585)* | *(119.868)* | *(34.772)* | *(37.870)* | *(52.454)* |

**Result.** The results are presented in Table 3. The table indicates that one of our proposed MSE estimation methods achieves the lowest error for each data set. This demonstrates the superiority of our one-step MSE estimation over conventional two-step estimation approaches for real-world data sets, validating the intuition of Vapnik’s principle. M2V, MVN, and ENS are unstable for cpusmall and cadata, yielding very high MSE for some samples in the operational data, leading to high average and high standard deviations of errors. RVN and its variant RVNnE often perform worse than M2V, which is the most naive method. Comparing M2V (trains means then sigmas) vs MVN (jointly trains means and sigmas), and M2RVN (trains means then sigmas) vs RVN (iteratively trains means and sigmas), it appears that the joint and iterative training of mean and variance networks do not always outperform the two-step training.
Among the three variants of our proposed methods, firstly, the results achieved by $\hat{K}^*$ are lower or comparable to those of $\hat{K}$ since $\hat{K}^*$ bounds $E^2$ more tightly and directly than $\hat{K}$, as discussed in the previous section. Furthermore, the results obtained with $\hat{L}$ are even better (or comparable) than those of $\hat{K}^*$. This confirms that our regularization improves the learning of the check model $h$ for real-world data set. ## 5 Conclusion In this work, we investigated the problem of MSE estimation. Instead of using existing two-step approaches, we proposed a novel one-step estimation method that directly and tightly upper bounds the MSE estimation error. Furthermore, we provided a theoretical upper bound for the generalization error of our method. Empirical results showed that our method yielded lower MSE estimation errors compared to the baseline methods for three synthetic and five benchmark data sets, thereby suggesting the superiority of our proposed method. REFERENCES Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U. Rajendra Acharya, Vladimir Makarenkov, and Saeid Nahavandi. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. *Information Fusion*, 76:243–297, 2021. Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In *ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 2019. Chih-Chung Chang and Chih-Jen Lin. Libsvm data: Classification, regression, and multi-label. [https://www.csie.ntu.edu.tw/~cjilin/libsvmtools/datasets/](https://www.csie.ntu.edu.tw/~cjilin/libsvmtools/datasets/) 2023. Accessed: 2023-09-25. Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, and Somesh Jha. Detecting errors and estimating accuracy on unlabeled data with self-training ensembles. In *Advances in Neural Information Processing Systems*, volume 34, 2021a. Mayee Chen, Karan Goel, Nimit S Sohoni, Fait Poms, Kayvon Fatahalian, and Christopher Re. Mandoline: Model evaluation under distribution shift. In *International Conference on Machine Learning*, volume 139, pp. 1617–1629, 2021b. Ching-Yao Chuang, Antonio Torralba, and Stefanie Jegelka. Estimating generalization under distribution shifts via domain-invariant representations. In *International Conference on Machine Learning*, pp. 1984–1994, 2020. Weijian Deng and Liang Zheng. Are labels always necessary for classifier accuracy evaluation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–1, 2021. Quynh Ngoc Thi Do, Judith Gaspers, Daniil Sorokin, and Patrick Lehnen. Predicting temporal performance drop of deployed production spoken language understanding models. In *Interspeech 2021*, 2021. Hady Elsahar and Matthias Gallé. To annotate or not? prediction of predicting performance drop under domain shift. In *Proceeding of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing*, 2019. Tongtong Fang, Nan Lu, Gang Niu, and Masashi Sugiyama. Rethinking importance weighting for deep learning under distribution shift. In *Advances in Neural Information Processing Systems*, volume 33, 2020. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. 
In *International Conference on Machine Learning*, volume 48, pp. 1050–1059, 2016. Maciej Grzenda, Heitor Murilo Gomes, and Albert Bifet. Delayed labelling evaluation for data streams. *Data Mining and Knowledge Discovery*, 34(5):1237–1266, 2020. Stephen José Hanson and Lorien Y. Pratt. Comparing biases for minimal network construction with back-propagation. In *Advances in Neural Information Processing Systems*, pp. 177–185, 1988. Takafumi Kanamori, Shohei Hido, and Masashi Sugiyama. A least-squares approach to direct importance estimation. *Journal of Machine Learning Research*, 10(48):1391–1445, 2009. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations*, 2015. V. Koltchinskii. Rademacher penalties and structural risk minimization. *IEEE Transactions on Information Theory*, 47(5):1902–1914, 2001. Dominik Kreuzberger, Niklas Kühl, and Sebastian Hirschl. Machine learning operations (mlops): Overview, definition, and architecture. *IEEE Access*, 11:31866–31879, 2023.
4yaFQ7181M
A discussion on how the anchor state computation rate $\Delta$ influences performance would be beneficial. Given that fluid dynamic systems can exhibit high-frequency changes, a delayed computation rate might fail to capture these rapid transitions.
SPACE AND TIME CONTINUOUS PHYSICS SIMULATION FROM PARTIAL OBSERVATIONS Steeven Janny LIRIS, INSA Lyon, France steeven.janny@insa-lyon.fr Madiha Nadri LAGEPP, Univ. Lyon 1, France madiha.nadri-wolf@univ-lyon1.fr Julie Digne LIRIS, CNRS, France julie.digne@cnrs.fr Christian Wolf Naver Labs Europe, France christian.wolf@naverlabs.com ABSTRACT Modern techniques for physical simulations rely on numerical schemes and mesh-refinement methods to address trade-offs between precision and complexity, but these handcrafted solutions are tedious and require high computational power. Data-driven methods based on large-scale machine learning promise high adaptivity by integrating long-range dependencies more directly and efficiently. In this work, we focus on fluid dynamics and address the shortcomings of a large part of the literature, which are based on fixed support for computations and predictions in the form of regular or irregular grids. We propose a novel setup to perform predictions in a continuous spatial and temporal domain while being trained on sparse observations. We formulate the task as a double observation problem and propose a solution with two interlinked dynamical systems defined on, respectively, the sparse positions and the continuous domain, which allows to forecast and interpolate a solution from the initial condition. Our practical implementation involves recurrent GNNs and a spatio-temporal attention observer capable of interpolating the solution at arbitrary locations. Our model not only generalizes to new initial conditions (as standard auto-regressive models do) but also performs evaluation at arbitrary space and time locations. We evaluate on three standard datasets in fluid dynamics and compare to strong baselines, which are outperformed both in classical settings and in the extended new task requiring continuous predictions. 1 INTRODUCTION The Lavoisier conservation principle states that changes in physical quantities in closed regions must be attributed to either input, output, or source terms. By applying this rule at an infinitesimal scale, we retrieve partial differential equations (PDEs) governing the evolution of a large majority of physics scenarios. Consequently, the development of efficient solvers is crucial in various domains involving physical phenomena. While conventional methods (e.g. finite difference or finite volume methods) showed early success in many situations, numerical schemes suffer from high computational complexity, in particular for growing requirements on fidelity and precision. Therefore, there is a need for faster and more versatile simulation tools that are reliable and efficient, and data-driven methods offer a promising opportunity. Large-scale machine learning offers a natural solution to this problem. In this paper, we address data-driven solvers for physics, but with additional requirements on the behavior of the simulator: R1. Data-driven – the underlying physics equation is assumed to be completely unknown. This includes the PDE, but also the boundary conditions. The dynamics must be discovered from a finite dataset of trajectories, i.e. a collection of observed behaviors from the physical system, R2. Generalization – the method must be capable of handling new initial conditions that do not explicitly belong to the training set, without re-training or fine-tuning, R3. 
**Time and space continuous** – the domain of the predicted solution must be continuous in space and time,\(^1\) so that it can be queried at any arbitrary location within the domain of definition. These requirements are common in the field but rarely addressed altogether. R1 allows for handling complex phenomena where the exact equation might be unknown, and R2 supports the growing need for faster simulators, which consequently must handle new ICs. Space and time continuity (R3) are also useful properties for standard simulations since the solution can be made as fine as needed in certain complex areas. This task requires learning from sparsely distributed observations only, and without any prior knowledge on the PDE form. In these settings, a standard approach consists of approximating the behavior of a discrete solver, enabling forecasting in an auto-regressive fashion \([Pfaff et al., 2020; Janny et al., 2023; Sanchez-Gonzalez et al., 2020]\), losing therefore spatial and temporal continuity. Indeed, auto-regressive models assume strong regularities in the data, such as a static spatial lattice and uniform time steps. For these reasons, generalization to new spatial locations or intermediate time steps is not straightforward. These methods satisfy R1 and R2, but not R3. In another trend, Physics-Informed Neural Networks (PINNs) learn a solution on a continuous domain. They leverage the PDE operator to optimize the weights of a neural network representing the solution, and cannot generalize to new ICs, thus violating R1 and R2. In this paper, we address R1, R2 and R3 altogether in a new setup involving two joint dynamical systems. R1 and R2 are satisfied using an auto-regressive discrete-time dynamics learned from the sparse observations and producing a trajectory in latent space. Then, R3 is achieved with a state observer derived from a second dynamical system in continuous time. This state observer relies on transformer-based cross-attention to enable evaluation at arbitrary spatio-temporal locations. In a nutshell: (a) We propose a new setup to address continuous space and time simulations of physical systems from sparse observation, leveraging insights from control theory. (b) We provide strong theoretical results indicating that our setup is well-suited to address this task compared to existing baselines, which are confirmed experimentally on challenging benchmarks. (c) We provide experimental evidence that our state observer is more powerful than handcrafted interpolations for the targeted task. (d) With experiments on three challenging standard datasets (\(Navier\)-\(Yin et al., 2022; Stokes, 2009; Shallow Water \(Yin et al., 2022; Galewsky et al., 2004; Eagle Janny et al., 2023\)), and against state-of-the-art methods (\(MeshGraphNet\) (MGN) \([Pfaff et al., 2020; DINO \(Yin et al., 2022); MAgNet \(Boussif et al., 2022\))\), we show that our results generalize to a wider class of problems, with excellent performances. ## 2 RELATED WORKS **Autoregressive models** – have been extensively used to replicate the behavior of iterative solvers in discrete time, especially in cases where the PDE is unknown or generalization to new initial conditions is needed. 
These models come in various internal architectures, including convolution-based models for systems observed on a dense uniform grid \([Stachenfeld et al., 2021; Guen & Thome, 2020; Bézenac et al., 2019]\) and graph neural networks \([Battaglia et al., 2016]\) that can adapt to arbitrary spatial discretizations \([Sanchez-Gonzalez et al., 2020; Janny et al., 2022a; Li et al., 2018]\). Such models have demonstrated a remarkable capacity to produce highly accurate predictions and generalize over long prediction horizons, making them particularly suitable for addressing complex problems such as fluid simulation \([Pfaff et al., 2020; Han et al., 2021; Janny et al., 2023]\). However, auto-regressive models are inherently limited to a fixed and constant spatio-temporal discretization grid, hindering their capability to evaluate the solution anywhere and at any time. Neural ordinary differential equations (Neural ODE \([Chen et al., 2018; Dupont et al., 2019]\)) offer a countermeasure to the fixed timestep constraint by learning continuous ODEs on discrete data using an explicit solver, such as Euler or Runge-Kutta methods. In theory, this enables the solution to be evaluated at any temporal location but in practice still relies on the discretization of the time variable. Moreover, extending this approach to PDEs is not straightforward. Contrarily to these approaches, we leverage the auto-regressive capacity and accuracy while allowing arbitrary evaluation of the solution at any point in both time and space. --- \(^1\)In what follows, while being a misnomer, *space and time continuity* of the solution designate the continuity of the spatial and temporal domain of definition of the solution, and not the continuity of the solution itself. Continuous solutions for PDEs – date back to the early days of deep learning (Dissanayake & Phan-Thien [1994], Lagaris et al. [1998], Psichogios & Ungar [1992]) and have recently experienced a resurgence of interest (Raissi et al. [2019], [2017]). Physics-informed neural networks represent the solution directly as a neural network and train the model to minimize a residual loss derived from the PDE. They are mesh-free, which alleviates the need for complex adaptive mesh refinement techniques (mandatory in finite volume methods), and have been successfully applied to a broad range of physical problems (Lu et al. [2021], Misyris et al. [2020], Zoboli et al. [2022], Kissas et al. [2020], Yang et al. [2019], Cai et al. [2021]), with a growing community proposing architecture designs specifically tailored for PDEs (Sitzmann et al. [2020], Fathony et al. [2021]) as well as new training methods (Zeng et al. [2023], Finzi et al. [2023], de Avila Belbute-Peres & Kolter [2023]). Yet, these models are also known to be difficult to train efficiently (Krishnapriyan et al. [2021], Wang et al. [2022]). Recently, neural operators have attempted to learn a mapping between function space, leveraging kernels in Fourier space (Li et al. [2020b] (FNO) or graphs (Li et al. [2020a] (GNO) to learn the correspondence from the initial condition to the solution at a fixed horizon. While some operator learning frameworks can theoretically generalize to unseen initial conditions and arbitrary locations, we must consider the practical limitations of existing baselines. For instance, FNO requires a static cartesian grid and cannot be directly evaluated outside the training grid. 
Similarly, GNO can handle arbitrary meshes in theory, but still has limitations in evaluating points outside the training grid and Li et al. [2021] variant can only be queried at fixed time increments. DeepONet (Lu et al. [2019]) can handle free sampling in time and space but is also constrained to a static observation grid. Continuous and generalizable solvers – represent a significant challenge. Few models satisfy all these conditions. MP-PDE (Brandstetter et al. [2022]) can handle free-form grids but cannot generalize to different resolutions between train and test, and performs auto-regressive temporal forecasting. Closer to our work, MAgNet (Boussif et al. [2022]) proposes to interpolate the observation graph in latent space to new query points before forecasting the solution using graph neural networks. However, they assume prior knowledge of the evaluation mesh and the new query points, use nearest neighbor interpolation instead of trained attention and struggle to generalize to finer grids during test time. In Hua et al. [2022], the auto-regressive MeshGraphNet (Pfaff et al. [2020]) is combined with Orthogonal Spline Collocation to allow for arbitrary spatial queries. Finally, DINo (Yin et al. [2022]) proposes a mesh-free, space-time continuous model to address PDE solving. The model uses context adaptation techniques to dynamically adapt the output of an implicit neural representation forward in time. DINo assumes the existence of a latent ODE modeling the temporal evolution of the context vector and learns it as a Neural ODE. In contrast, our method differs from DINo as our model is based on physics forecasting in an auto-regressive manner. We achieve space and time continuity through a learned dynamical attention transformer capable of handling arbitrary locations and points in time. Our design choices allow for generalization on new spatial and temporal locations, i.e., not limited to discrete time steps, and new initial conditions while being trainable from sparse observations. 3 Continuous Solutions from Sparse Observations Consider a dynamical system following a Partial Differential Equation (PDE) defined for all \((x, t) \in \Omega \times [0, T]\), with \(T\) a positive constant: \[ \begin{align*} \dot{s}(x, t) &= f(s(x, t)) \quad \forall (x, t) \in \Omega \times [0, T], \\ s(x, 0) &= s_0(x) \quad \forall x \in \Omega, \quad s(x, t) = \bar{s}(x, t) \quad \forall (x, t) \in \partial \Omega \times [0, T] \end{align*} \] where the state lies in an invariant set \(s \in S\), \(f : S \mapsto S\) is an unknown operator, \(s_0 : \Omega \mapsto \mathbb{R}^n\) is the initial condition (IC) and \(\bar{s} : \partial \Omega \times [0, T] \mapsto \mathbb{R}^n\) the boundary condition. In what follows, we consider trajectories with shared boundary conditions, hence we omit \(\bar{s}\) from the notation for readability. In practice, the operator \(f\) is unknown, and we assume access to a set \(D\) of \(K\) discrete trajectories from different ICs, \(s_k^0\), sampled at sparse and scattered locations in time and space. Formally, we introduce two finite sets \(X \subset \Omega\) of fixed positions and fixed regularly sampled times \(T\) at sampling rate \(\Delta^*\). 
Let \(S(s_0, x, t)\) be the solution of this PDE from IC \(s_0\), the dataset \(D\) is given as: \[D := \left\{ S(s_k^0, X, T) \mid k \in [1, K] \right\}\] Our task is formulated as: Given \(D\), a new initial condition \(s_0 \in S\), and a query \((x, t) \in \Omega \times [0, T]\), find the solution of equation \[1\] at the queried location and from the given IC, that is \(S(s_0, x, t)\). --- 2 Code will be made public. Project page: https://continuous-pde.github.io/ Figure 1: **Model overview** – We achieve space and time continuous simulations of physics systems by formulating the task as a double observation problem. **System 1** is a discrete dynamical model used to compute a sequence of latent anchor states $z_d$ auto-regressively, and **System 2** is used to design a state estimator $\psi_q$, retrieving the dense physical state at arbitrary locations $(x, t)$. Note that this task involves generalization to new ICs, as well as estimation to unseen spatial locations within $\Omega$ and unseen time instants within $[0, T]$. We do not explicitly require extrapolation to instants $t > T$, although it comes as a side benefit of our approach up to some extent. ### 3.1 THE DOUBLE OBSERVATION PROBLEM The task implies extracting regularities from weakly informative physical variables that are sparsely measured in space and time, since $\mathcal{X}$ and $\mathcal{T}$ contain very few elements. Consequently, the possibility to forecast their trajectories from off-the-shelf auto-regressive methods is very unlikely (as confirmed experimentally). To tackle this challenge, we propose an approach accounting for the fact that the phenomenon is not directly observable from the sparse trajectories, but can be deduced from a richer latent state-space in which the dynamics is markovian. We introduce two linked dynamical models lifting sparse observations to dense trajectories guided by observability considerations, namely \[ \begin{align*} \text{System 1:} & \quad z_d[n+1] = f_1(z_d[n]) \\ & \quad s_d[n] = h_1(z_d[n]) \\ \text{System 2:} & \quad \dot{s}(x, t) = f_2(s, x, t) \\ & \quad z(x, t) = h_2(s, x, t) \quad \forall (x, t) \in \Omega \times [0, T] \end{align*} \] where for all $n \in \mathbb{N}$, we note $s_d[n] = s(\mathcal{X}, n\Delta)$ the sparse observation at some instant $n\Delta$ (the sampling rate $\Delta$ is not necessarily equal to the sampling rate $\Delta^*$ used for data acquisition, which we will exploit during training to improve generalization. This will be detailed later). **System 1** – is a discrete-time dynamical system where the available measurements $s_d[n]$ are considered as partial observations of a latent state variable $z_d[n]$. We aim to derive an output predictor from System 1 to forecast trajectories of sparse observations auto-regressively from the sparse IC. As mentioned earlier, sparse observations are unlikely to be sufficient to perform predictions, hence we introduce a richer latent state variable $z_d$ in which the dynamics is truly markovian, and observations $s_d[n]$ are seen as measurements of the state $z_d$ using the function $h_1$. **System 2** – is a continuous-time dynamical system describing the evolution of the to-be-predicted dense trajectory $S(s_0, x, t)$. It introduces continuous observations $z(x, t)$ such that $z(\mathcal{X}, n\Delta) = z_d[n]$. The insight is that the state representation $z_d[n]$ obtained from System 1 is designed to contain sufficient information to predict $s_d[n]$, but not necessarily to predict the dense state. 
Formally, $z_d$ represents solely the observable part of the state, in the sense of control theory. At inference time, we forecast at query location $(x, t)$ with a 2-step algorithm: (**Step-1**) System 1 is used as an output predictor from the sparse IC $s_d[0]$, and computes a sequence $z[0], z[1], \ldots$, which we refer to as “anchor states”. This sequence allows the dynamics to be Markovian, provides sufficient information for the second state estimation step, and holds information to predict the sparse observations, allowing supervision during training. (**Step-2**) We derive a state observer from System 2 leveraging the anchor states over the whole time domain to estimate the dense solution at an arbitrary location in space and time (see figure 1). Importantly, for a given IC, the anchor states are computed only once and reused within System 2 to estimate the solution at different points.

### 3.2 THEORETICAL ANALYSIS

In this section, we introduce theoretical results supporting the use of Systems 1 and 2. In particular, we show that using System 1 to forecast the sparse observations in latent space \( z_d \), rather than directly operating in the physical space, leads to smaller upper bounds on the prediction error. Then, we show the existence of a state estimator from System 2 and compute an upper bound on the estimation error depending on the length of the sequence of anchor states.

**Step 1** – consists of computing the sequence of anchor states guided by an output prediction task of the sparse observations. As classically done, we introduce an encoder (formally, a state observer) \( e(s_d[0]) = z_d[0] \) coupled to System 1 to project the sparse IC into a latent space \( z_d \). Following System 1, we compute the anchor states \( z_d \) auto-regressively (with \( f_1 \)) in the latent space. The sparse observations are extracted from \( z_d \) using \( h_1 \). In comparison, existing baselines (Pfaff et al., 2020; Sanchez-Gonzalez et al., 2020; Stachenfeld et al., 2021) maintain the state in the physical space and discard the intermediate latent representation between iterations. Formally, let us consider approximations \( \hat{f}_1, \hat{h}_1, \hat{e} \) (in practice realized as deep networks trained from data \( D \)) of \( f_1, h_1 \) and \( e \), and compare the prediction algorithm of the classic auto-regressive (AR) approach and ours:
\[
\text{Classic AR: } \hat{s}_{d\text{ar}}[n] := (\hat{h}_1 \circ \hat{f}_1 \circ \hat{e})^n(s_d[0]) \qquad \text{Ours: } \hat{s}_d[n] := \hat{h}_1 \circ \hat{f}_1^n \circ \hat{e}(s_d[0])
\]
Classical AR approaches re-project the latent state into the physical space at each step and repeat “encode-process-decode”. Our method encodes the sparse IC, advances the system in the latent space, and decodes toward the physical space at the end. A similar approach has also been explored in Wu et al. (2022); Kochkov et al. (2020), albeit in different contexts, without theoretical analysis.

**Proposition 1** Consider a dynamical system of the form of System 1 and assume the existence of a state observer \( e \) along with approximations \( \hat{f}_1, \hat{h}_1, \hat{e} \) with Lipschitz constants \( L_f, L_h \) and \( L_e \) respectively, such that \( L_h L_f L_e \neq 1 \). If there exist \( \delta_f, \delta_h, \delta_e \in \mathbb{R}^+ \) such that \( \forall (z, s) \in \mathbb{R}^{n_z} \times \mathbb{R}^{n_s} \)
\[
|f_1(z) - \hat{f}_1(z)| \leq \delta_f, \quad |h_1(z) - \hat{h}_1(z)| \leq \delta_h, \quad |e(s) - \hat{e}(s)| \leq \delta_e
\]
for the Euclidean norm \( |\cdot| \), then for all integers \( n > 0 \), with \( \hat{s}_d[n] \) and \( \hat{s}_{d\text{ar}}[n] \) as in equation 3,
\[
|s_d[n] - \hat{s}_d[n]| \leq \delta_h + L_h \left( \delta_f\,\frac{L_f^n - 1}{L_f - 1} + L_f^n \delta_e \right)
\]
\[
|s_d[n] - \hat{s}_{d\text{ar}}[n]| \leq \delta\,\frac{L^n - 1}{L - 1}
\]
with \( \delta = \delta_h + L_h \delta_f + L_h L_f \delta_e \) and \( L = L_h L_f L_e \).

**Proof**: See appendix B. This result shows that falling back to the physical space at each time step degrades the upper bound of the prediction error. Indeed, if \( L < 1 \), the upper bound converges trivially to zero when \( n \) increases, and hence can be ignored. Otherwise, the upper bound for the classic AR scheme appears to be more sensitive to the approximation errors \( \delta_h, \delta_f \) and \( \delta_e \) compared to our approach (for a formal comparison, see appendix C). Intuitively, this means that information is lost in the observation space, which thus needs to be re-estimated at each iteration when using the classic AR scheme. By maintaining a state variable in the latent space, we allow this information to flow readily between each step of the simulator (see blue frame in figure 1).

**Step 2** – The state estimator builds upon System 2 and relies on the set of anchor states from the previous step to estimate the dense physical state at arbitrary locations in space and time. Formally, we look for a function \( \psi_q \) leveraging the sequence of anchor states \( z_d[0], \ldots, z_d[q] \) (simulated from the sparse IC \( s_d[0] \)) to retrieve the dense solution. In what follows, we show that (1) such a function \( \psi_q \) exists and (2) we compute an upper bound on the estimation error depending on the length of the sequence. To do so, consider the functional which outputs the anchor states from any IC \( s_0 \in S \):
\[
O_p(s_0) = \begin{bmatrix} h_2(s_0(X)) & h_2(S(s_0, X, \Delta)) & \cdots & h_2(S(s_0, X, p\Delta)) \end{bmatrix} = \begin{bmatrix} z_d[0] & \cdots & z_d[p] \end{bmatrix}
\]
In practice, the ground truths \( z_d[n] \) are not perfectly known, as they are obtained from a data-driven output predictor (Step 1) using the sparse IC. Inspired by Janny et al. (2022b), we state the following proposition. (Since the simulation is conducted up to \( T \), and considering the time step \( \Delta \), in practice \( q \leq \lfloor \frac{T}{\Delta} \rfloor \).)
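To make the contrast of equation 3 concrete before stating Proposition 2, here is a minimal sketch of the two rollout schemes analysed in Proposition 1. The names enc, f_lat and dec stand for \(\hat{e}\), \(\hat{f}_1\) and \(\hat{h}_1\); they are assumptions of this illustration, not the paper's released code.

```python
# Minimal sketch (illustrative, not the authors' code) of the two rollout schemes of equation 3.
# enc, f_lat, dec are stand-ins for the learned maps \hat{e}, \hat{f}_1, \hat{h}_1.

def rollout_classic_ar(s0, enc, f_lat, dec, n_steps):
    """Encode-process-decode at every step: the state is kept in the physical space."""
    s = s0
    traj = []
    for _ in range(n_steps):
        s = dec(f_lat(enc(s)))      # re-project to the physical space at each step
        traj.append(s)
    return traj

def rollout_latent(s0, enc, f_lat, dec, n_steps):
    """Ours: encode once, advance in the latent space, decode only to read out observations."""
    z = enc(s0)
    traj = []
    for _ in range(n_steps):
        z = f_lat(z)                # anchor states z_d[1], z_d[2], ...
        traj.append(dec(z))         # sparse observations for supervision; z is kept for Step 2
    return traj
```

Proposition 1 quantifies why the second scheme accumulates less approximation error: the latent state is never collapsed back through \(\hat{h}_1\) between steps.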
If there exist \( \delta_f, \delta_h, \delta_e \in \mathbb{R}^+ \) such that \( \forall (z, s) \in \mathbb{R}^{n_z} \times \mathbb{R}^{n_s} \) \[ |f_1(z) - \hat{f}_1(z)| \leq \delta_f, \quad |h_1(z) - \hat{h}_1(z)| \leq \delta_h, \quad |e(s) - \hat{e}(s)| \leq \delta_e \] for the Euclidean norm \( |\cdot| \), then for all integer \( n > 0 \), with \( \hat{s}_d[n] \) and \( \hat{s}_{d\text{ar}}[n] \) as in equation 3, \[ |\hat{s}_d[n] - \hat{s}_{d\text{ar}}[n]| \leq \delta_h + L_h \left( \frac{L_f^n - 1}{L_f - 1} + \frac{L_f^n \delta_e}{L_f - 1} \right) \] \[ |\hat{s}_d[n] - \hat{s}_{d\text{ar}}[n]| \leq \delta \frac{L^n - 1}{L - 1} \] with \( \delta = \delta_h + L_h \delta_f + L_h L_f \delta_e \) and \( L = L_h L_f L_e \). **Proof**: See appendix B. This result shows that falling back to the physical space at each time step degrades the upper bound of the prediction error. Indeed, if \( L < 1 \), the upper bound converges trivially to zero when \( n \) increases, and hence can be ignored. Otherwise, the upper bound for the classic AR scheme appears to be more sensitive to approximation errors \( \delta_h, \delta_f \) and \( \delta_e \) compared to our approach (for a formal comparison, see appendix C). Intuitively it means that information is lost in the observation space, which thus needs to be re-estimated at each iteration when using the classic AR scheme. By maintaining a state variable in the latent space, we allow this information to flow readily between each step of the simulator (see blue frame in figure 1). **Step 2** – The state estimator builds upon System 2 and relies on the set of anchor states from the previous step to estimate the dense physical state at arbitrary locations in space and time. Formally, we look for a function \( \psi_q \) leveraging the sequence of anchor states \( z_d[0], \ldots, z_d[q] \) (simulated from the sparse IC \( s_d[0] \)) to retrieve the dense solution \( x \). In what follows, we show that (1) such a function \( \psi_q \) exists and (2) we compute an upper bound on the estimation error depending on the length of the sequence. To do so, consider the functional which outputs the anchor states from any IC \( s_0 \in S \) \[ O_p(s_0) = \begin{bmatrix} h_2(s_0(X)) & h_2(S(s_0, X, \Delta)) & \cdots & h_2(S(s_0, X, p\Delta)) \end{bmatrix} = \begin{bmatrix} z_d[0] & \cdots & z_d[p] \end{bmatrix} \] In practice, the ground truths \( z_d[n] \) are not perfectly known, as they are obtained from a data-driven output predictor (step 1) using the sparse IC. Inspired from Janny et al. (2022b), we state: Since the simulation is conducted up to \( T \), and considering the time step \( \Delta \), in practice \( q \leq \lfloor \frac{T}{\Delta} \rfloor \). Proposition 2 Consider a dynamical system defined by System 2 and equation (7). Assume that A1. \( f_2 \) is Lipschitz with constant \( L_s \), A2. there exists \( p > 0 \) and a strictly increasing function \( \alpha \) such that \( \forall s_a, s_b \in S^2 \) and \( \forall q \geq p \) \[ |O_q(s_a) - O_q(s_b)| \geq \alpha(q)|s_a - s_b|_S \] where \( |\cdot|_S \) is an appropriate norm for \( S \). Then, \( \forall q \geq p \), there exists \( \psi_q \) such that, for \( (x, t) \in \Omega \times [0, T] \) and \( \delta_n \) such that \( \hat{z}_d[n] = z_d[n] + \delta_n \), for all \( n \leq q \), \[ \psi_q(z_d[0], \ldots, z_d[q], x, t) = S(s_0, x, t) \] (9) \[ |S(s_0, x, t) - \psi_q(\hat{z}_d[0], \ldots, z_d[q], x, t)|_S \leq 2\alpha(q)^{-1}|\delta_0|_q e^{L_s t}. 
\] (10) where \( \delta_0[q] = [\delta_0 \cdots \delta_q] \). Proof: See appendix D. Assumption A2 states that the longer we observe two trajectories from different ICs, the easier it will be to distinguish them, ruling out systems collapsing to the same state. Such systems are uncommon since forecasting their trajectory becomes trivial after some time. This assumption is related to finite-horizon observability in control theory, a property of dynamical systems guaranteeing that the (markovian) state can be retrieved given a finite number \( p \) of past observations. Equation (8) is associated with injectivity of \( O_q \), hence the existence of a left inverse mapping the sequence of anchor states to the IC \( s_0 \). Proposition 2 highlights a trade-off on the performance of \( \psi_q \). On one hand, longer sequences of anchor states are harder to predict, leading to a larger \( |\delta_0|_q \), which impacts the state estimator \( \psi_q \) negatively. On the other hand, longer sequences hold more information that can still be leveraged by \( \psi_q \) to improve its estimation, represented by \( \alpha(q)^{-1} \) in equation (10). In contrast to competing baselines or conventional interpolation algorithms, our approach takes this trade-off into account, by explicitly leveraging the sequence to estimate the dense solution, as will be discussed below. Discussion and related work – the competing baselines can be analyzed using our setup, yet in a weaker configuration. For instance, one can see Step 2 as an interpolation process, and replace it with a conventional interpolation algorithm, which typically relies on spatial neighbors only. Our method not only exploits spatial neighborhoods but also leverages temporal data, improving the performance, as shown in proposition 2 and empirically corroborated in Section 4. MAgNet (Boussif et al., 2022) uses a reversed interpolate-forecast scheme compared to ours. The IC \( s_d[0] \) is interpolated right from the start to estimate \( s_0 \) (corresponding to our Step 2, with \( q=1 \)), and then simulated with an auto-regressive model in the physical space (with the classic AR scheme). Propositions 1 and 2 show that the upper bounds on the estimation and prediction error are higher than ours. Moreover, if the number of query points exceeds the number of known points (\(|\Omega| \gg |X|\)), the input of the auto-regressive solver is filled with noisy interpolations, which impacts performance. DINO (Yin et al., 2022) is a very different approach leveraging a spatial implicit neural representation modulated by a context vector, whose dynamics is modeled via a learned ODE. This approach is radically different than ours and arguably involves stronger hypotheses, such as the existence of a learnable ODE modeling the dynamics of a suitable weight modulation vector. In contrast, our method relies on arguably more sound assumptions, i.e. the existence of an observable discrete dynamics explaining the sparse observation, and the finite-time observability of System 2. 3.3 IMPLEMENTATION The implementation follows the algorithm described in the previous section: (Step-1) rolls out predictions of anchor states from the IC. (Step-2) estimates the state at the query position from these anchor states. 
The encoder \( \hat{e} \) from Step 1 is a multi-layer perceptron (MLP) which takes as input the sparse IC \( s_d[0] \) and the positions \( X \) and outputs a latent state variable \( z_d[0] \) structured as a graph, with edges computed with a Delaunay triangulation. Hence, each anchor is a graph \( z_d[n] = \{z_d[n]_i\} \), but we will omit index \( i \) over graph nodes in what follows if not required for understanding. We model \( \hat{f}_1 \) as a multi-layer Graph Neural Network (GNN) (Battaglia et al., 2016). The anchor states \( z_d[n] \) are defined at fixed time steps \( n\Delta \), which might not match \( \Delta^* \) used in the data \( T \). We found it beneficial to choose $\Delta = k \times \Delta^*$ with $k > 1 \in \mathbb{N}$ such that the model can be queried during training on time points $t \in T$ that do not match exactly with every time-steps in $z_d[0], z_d[1], \ldots$, but rather on a subset of them, hence encouraging generalization to unseen time. The observation function $\hat{h}_1$ is an MLP applied on the vector at node level in the graph $z_d$. The state estimator $\psi_q$ is decomposed into a Transformer model (Vaswani et al., 2017) coupled to a recurrent neural network to provide an estimate at query spatio-temporal query position $(x, t)$. First, through cross-attention we translate the set of anchor states $z_d[n]$ (one embedding per graph node $i$ and per instant $n$) into a set of estimates of the continuous variable $z(x, t)$ conditioned at the instant $n\Delta$, which we denote $z_{n\Delta}(x, t)$ (one embedding per instant $n$). Following advances in geometric mappings in computer vision (Saha et al., 2022), we use multi-head cross-attention to query from coordinates $(x, t)$ to keys corresponding to the nodes $i$ in each graph anchor state $z_d[n]$, $\forall n$: $$z_{n\Delta}(x, t) = f_{\text{mha}}(Q=\zeta_\omega(x, t), K=V=\{z_d[n]\}_i + \zeta_\omega(\mathcal{X}, n\Delta)), // \text{attention over nodes } i$$ where $Q, K, V$ are, respectively, Query, Key and Value inputs to the cross-attention layer $f_{\text{mha}}$ (Vaswani et al., 2017) and $\zeta_\omega$ a Fourier positional encoding with a learned frequency parameter $\omega$. Finally, we leverage a state observer to estimate the dense solution at the query point from the sequence of conditioned anchor variables, over time. This is achieved with a Gated Recurrent Unit (GRU) (Cho et al., 2014) maintaining a hidden state $u[n]$, $$u[n] = r_{\text{gru}}(u[n-1], z_{n\Delta}(x, t)), \quad \tilde{S}(s_0, x, t) = D(u[q]),$$ which shares similarities with conventional state-observer designs in control theory (Bernard et al., 2022). Finally, an MLP $D$ maps the final GRU hidden state to the desired output, that is, the value of the solution at the desired spatio-temporal coordinate $(x, t)$. See appendix E for details. ### 3.4 Training Generalization to new input locations during training is promoted by creating artificial generalization situations using sub-sampling techniques of the sparse sets $\mathcal{X}$ and $T$. **Artificial generalization** – The anchor states $z_d[n]$ are computed at time rate $\Delta$ larger than the available rate $\Delta^*$. This creates situations during training where the state estimator $\psi_q$ does not have access to a latent state perfectly matching with the queried time. We propose a similar trick to promote spatial generalization. At each iteration, we sub-sample the (already sparse) IC $s_d[0]$ randomly to obtain $\tilde{s}_d[0]$ defined on a subset of $\mathcal{X}$. 
We then compute the anchor states $\tilde{z}_d$ using System 1. On the other hand, the query points are selected in the larger set $\mathcal{X}$. Consequently, System 2 is exposed to positions that do not always match the ones in $z_d[n]$. Note that the complete domain of definition $\Omega \times [0, T]$ remains unseen during training.

**Training objective** – To reduce training time, we randomly sample $M$ query points $(x_m, \tau_m)$ in $\mathcal{X} \times T$ at each iteration, with a probability proportional to the previous error of the model at this point since its last selection (see appendix E), and we minimize the loss
$$L = \underbrace{\sum_{k=1}^{K} \sum_{m=1}^{M} \left| S(s_k^0, x_m, \tau_m) - \psi_q(\tilde{z}_d[0], \ldots, \tilde{z}_d[q], x_m, \tau_m) \right|^2}_{L_{\text{continuous}}} + \underbrace{\sum_{n=0}^{\lfloor T/\Delta \rfloor} \left| \tilde{s}_d[n] - \hat{h}_1(\tilde{z}_d[n]) \right|^2}_{L_{\text{dynamics}}},$$
with $\tilde{z}_d[n] = \hat{f}_1^n \circ \hat{e}(\tilde{s}_d[0])$. $L_{\text{continuous}}$ supervises the model end-to-end, and $L_{\text{dynamics}}$ trains the latent anchor states $z_d$ to predict the sparse observations from the IC.

## 4 EXPERIMENTAL RESULTS

**Experimental setup** – $\mathcal{X} \times T$ results from sub-sampling $\Omega \times [0, T]$ with different rates to control the difficulty of the task. We evaluate on three highly challenging datasets (details in appendix F):

- **Navier** (Yin et al., 2022) simulates the vorticity of a viscous, incompressible flow driven by a sinusoidal force acting on a square domain with periodic boundary conditions.
- **Shallow Water** (Yin et al., 2022; Galewsky et al., 2004) studies the velocity of shallow waters evolving on the tangent surface of a 3D sphere.
- **Eagle** (Janny et al., 2023) is a challenging dataset of turbulent airflow generated by a moving drone in a 2D environment with many different scene geometries.

We evaluate our model against three baselines representing the state of the art in continuous simulations. **Interpolated MeshGraphNet (MGN)** (Pfaff et al., 2020) is a standard multi-layered GNN, used auto-regressively and extended to spatio-temporal continuity using physics-agnostic interpolation. **MaGNet** (Boussif et al., 2022) interpolates the IC at the query position in latent space before using MGN. The original implementation assumes knowledge of the target graph during training, including new queries. When used for super-resolution, the authors kept the ratio between the number of new query points and available points constant. Hence, while MaGNet is queried at unseen locations, it also benefits from more information. In our setup, the model is exposed to a fixed number of points but does not receive more samples during evaluation. This makes our problem more challenging than the one addressed in Boussif et al. (2022). **DINO** (Yin et al., 2022) models the solution as an Implicit Neural Representation (INR) $s(x, \alpha_t)$ where the spatial coordinates $x$ are fed to an MFN (Fathony et al., 2021) and $\alpha_t$ is a context vector modulating the weights of the INR. The dynamics of $\alpha$ is modeled with a Neural-ODE, where the dynamics function is an MLP. We share common objectives with DINO and take inspiration from their evaluation tasks, yet in a more challenging setup. Details of the baselines are in appendix F.

We highlight a caveat on MaGNet: the model can handle a limited amount of new queries, roughly equal to the number of observed points. Our task requires the solution at up to 20 times more queries than available points.
In this situation, the graph in MaGNet is dominated by noisy states from interpolation, and the auto-regressive forecaster performs poorly. During evaluation, we found it beneficial to split the queries into chunks of 10 nodes and to apply the model several times. This strongly improves the performance at the cost of an increased runtime.

### Space Continuity

Table 1 compares the spatial interpolation power of our method versus several baselines. The MSE values computed on the training domain (In-$\mathcal{X} = \mathcal{X}$) and outside (Ext-$\mathcal{X} = \Omega \setminus \mathcal{X}$) show that our method offers the best performance, especially for the Ext-domain task, which is our aim. To ablate dynamics and evaluate the impact of trained interpolations, we also report the predictions of a **Time Oracle** which uses sparse ground-truth values at all time steps and interpolates (bicubic) spatially. This allows us to assess whether the method does better than a simple axiomatic interpolation. While MGN offers competitive in-domain predictions, the cubic interpolation fails to extrapolate reliably to unseen points. This can be seen in the In/Ext gap for Interpolated MGN, which is very close to the Time Oracle error. MaGNet, which builds on a similar framework, is hindered by the larger amount of unobserved data in the input mesh. At test time, the same number of initial condition points is provided, but the method interpolates substantially more points. DINO achieves a very low In/Ext gap, yet fails on highly (5%) down-sampled tasks. One of the key differences with DINO is that its dynamics relies on an internal ODE for the temporal evolution of a modulation vector. In contrast, our model uses an explicit auto-regressive backbone, and time forecasting is handled in an arguably more meaningful space, which we conjecture to be the reason why we achieve better results (see fig. 5 in the appendix).

### Time Continuity

This setting is a step forward in difficulty, as the model needs to interpolate not only to unseen spatial locations (datasets are undersampled at 25%) but also at intermediate timesteps (Ext-$\mathcal{T}$, Table 2). All models perform well on **Shallow Water**, which is relatively easy. Both DINO and MaGNet leverage a discrete integration scheme (Euler for MaGNet and RK4 for DINO) allowing querying the model between timesteps seen at training. These schemes struggle to capture the data dependencies effectively and therefore the methods fail on **Navier** (see also Figure 6 for qualitative results).

| Method | | Navier High | Navier Mid | Navier Low | Shallow Water High | Shallow Water Mid | Shallow Water Low | Eagle High | Eagle Low |
|---|---|---|---|---|---|---|---|---|---|
| DINo (Yin et al., 2022) | In-$\mathcal{X}$ | 1.557 | 1.130 | 1.878 | 0.1750 | 0.1814 | 0.2733 | 287.3 | 302.7 |
| | Ext-$\mathcal{X}$ | 1.600 | 1.253 | 5.493 | 4.638 | 13.40 | 21.55 | 381.7 | 489.6 |
| Interp. MGN (Pfaff et al., 2020) | In-$\mathcal{X}$ | 1.913 | 0.9969 | 0.6012 | 0.3663 | 0.2835 | 0.7309 | 64.44 | 83.58 |
| | Ext-$\mathcal{X}$ | 2.694 | 4.784 | 14.80 | 1.744 | 4.221 | 8.187 | 173.4 | 241.5 |
| Time Oracle (n.c.) | In-$\mathcal{X}$ | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| | Ext-$\mathcal{X}$ | 0.851 | 4.204 | 15.63 | 1.617 | 4.327 | 8.522 | 147.0 | 221.2 |
| MaGNet (Boussif et al., 2022) | In-$\mathcal{X}$ | 18.17 | 6.047 | 8.679 | 0.3196 | 0.3358 | 0.4292 | 99.79 | 124.5 |
| | Ext-$\mathcal{X}$ | 35.73 | 26.24 | 57.21 | 10.21 | 23.20 | 30.55 | 194.3 | 260.7 |
| Ours | In-$\mathcal{X}$ | 0.1989 | 0.2136 | 0.2446 | 0.2940 | 0.3139 | 0.2700 | 70.02 | 78.83 |
| | Ext-$\mathcal{X}$ | 0.2029 | 0.2463 | 0.5601 | 0.4493 | 1.051 | 2.800 | 90.88 | 117.2 |

Table 1: **Space Continuity** – we evaluate the spatial interpolation power of our method vs. the baselines and standard interpolation techniques. We vary the number of available measurement points in the training data from High (25% of the simulation grid), Mid (10%), to Low (5%), and show that our model outperforms the baselines. Evaluation is conducted over 20 frames in the future (10 for *Eagle*) and we report the MSE to the ground-truth solution ($\times 10^{-3}$).

Figure 2: **Results on Eagle** – Per-point error of the flow prediction on an *Eagle* example in the *Low* spatial down-sampling scenario. Our model exhibits lower errors, as also shown in Tables 1 and 2.

| | | Navier | Shallow Water | Eagle |
|---|---|---|---|---|
| | | 1/1 | 1/2 | 1/4 |
| DINo (Yin et al., 2022) | In-$\mathcal{T}$ | 1.590 | 36.31 | 46.02 |
| | Ext-$\mathcal{T}$ | n/a | 39.42 | 54.72 |
| Interp. MGN (Pfaff et al., 2020) | In-$\mathcal{T}$ | 2.506 | 4.834 | 12.77 |
| | Ext-$\mathcal{T}$ | n/a | 5.922 | 36.43 |
| Spatial Oracle (n.c.) | In-$\mathcal{T}$ | n/a | n/a | n/a |
| | Ext-$\mathcal{T}$ | n/a | 1.296 | 28.58 |
| MaGNet (Boussif et al., 2022) | In-$\mathcal{T}$ | 31.51 | 135.0 | 243.9 |
| | Ext-$\mathcal{T}$ | n/a | 142.8 | 255.5 |
| Ours | In-$\mathcal{T}$ | 0.2019 | 0.1964 | 0.4062 |
| | Ext-$\mathcal{T}$ | n/a | 0.2138 | 11.36 |

Table 2: **Time Continuity** – we evaluate the time interpolation power of our method vs. the baselines. Models are trained and evaluated with 25% of $\Omega$, and with different temporal resolutions (full, half, and quarter of the original). The Spatial Oracle (*not comparable!*) uses the exact solution at every point in space and performs temporal interpolation. Evaluation is conducted over 20 frames in the future (10 for *Eagle*) and we report the MSE compared to the ground-truth solution ($\times 10^{-3}$).

*Eagle* is particularly challenging, the main source of error being the spatial interpolation, as can be seen in Figure 2 – our method yields lower errors in flow estimation.

**Many more experiments** – are available in appendix G. We study the impact of key design choices, artificial generalization, and the dynamics loss. We show qualitative results on time interpolation and time extrapolation on the *Navier* dataset. We explore generalization to different grids. We provide more empirical evidence of the soundness of Step 2 in an ablation study (including a comparison with the attentive neural process (Kim et al., 2018), an attention-based structure somewhat close to ours), and observe attention maps on several examples. We show that our state estimator goes beyond local interpolation, as conventional interpolation algorithms would do. Finally, we also measure the computational burden of the discussed methods and show that our approach is more efficient.
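For clarity on the metric reported in Tables 1 and 2, the following minimal sketch shows how the In-$\mathcal{X}$ / Ext-$\mathcal{X}$ split of the MSE can be computed from a rollout; the array shapes and the $\times 10^{-3}$ scaling convention are assumptions made for illustration, not the authors' evaluation code.

```python
# Split the squared error between points seen during training (In-X) and unseen
# points of the domain (Ext-X = Omega \ X), averaged over future frames.
import numpy as np


def in_ext_mse(pred, target, observed_mask):
    """pred, target: (T, N) rollouts over T frames and N grid points.
    observed_mask: (N,) boolean, True where the point belongs to the training set X."""
    err = (pred - target) ** 2
    in_mse = err[:, observed_mask].mean() * 1e3    # reported in units of 1e-3
    ext_mse = err[:, ~observed_mask].mean() * 1e3
    return in_mse, ext_mse


# Example with random data: 20 future frames, 1000 grid points, 25% observed.
rng = np.random.default_rng(0)
mask = rng.random(1000) < 0.25
print(in_ext_mse(rng.normal(size=(20, 1000)), rng.normal(size=(20, 1000)), mask))
```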
5 CONCLUSION

We exploit a double dynamical system formulation for simulating physical phenomena at arbitrary locations in time and space. Our approach comes with theoretical guarantees on existence and accuracy without knowledge of the underlying PDE. Furthermore, our method generalizes to unseen initial conditions and reaches excellent performance, outperforming existing methods. Potential applications of our model go beyond fluid dynamics, and it can be applied to various PDE-based problems. Yet, our approach relies on several hypotheses such as regular time sampling and observability. Finally, for known and well-studied phenomena, it would be interesting to add physics priors to the system, a nontrivial extension that we leave for future work.

Reproducibility – the detailed model architecture is described in the appendix. For the sake of reproducibility, in the case of acceptance, we will provide the source code for training and evaluating our model, as well as trained model weights. For training, we will provide instructions for setting up the codebase, including installing external dependencies, pre-trained models, and pre-selected hyperparameter configurations. For the evaluation, the code will include evaluation metrics directly comparable to the paper's results.

Ethics statement – While our simulation tool is unlikely to yield unethical results, we are mindful of potential negative applications of improving fluid dynamics simulations, particularly in military contexts. Additionally, we strive to minimize the carbon footprint associated with our training processes.

6 ACKNOWLEDGEMENTS

We acknowledge support through the French grants "Delicio" (ANR-19-CE23-0006) of call CE23 "Intelligence Artificielle" and "Remember" (ANR-20-CHIA0018) of call "Chaires IA hors centres". This work was performed using HPC resources from GENCI-IDRIS (Grant 2023-AD010614014).

REFERENCES

Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. *Neural Information Processing Systems*, 2016.

Pauline Bernard, Vincent Andrieu, and Daniele Astolfi. Observer design for continuous-time dynamical systems. *Annual Reviews in Control*, 2022.

Emmanuel De Bézenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. *Journal of Statistical Mechanics: Theory and Experiment*, 2019.

Oussama Boussif, Yoshua Bengio, Loubna Benabbou, and Dan Assouline. Magnet: Mesh agnostic neural pde solver. In *Neural Information Processing Systems*, 2022.

Johannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers. In *International Conference on Learning Representations*, 2022.

Shengze Cai, Zhiping Mao, Zhicheng Wang, Minglang Yin, and George Em Karniadakis. Physics-informed neural networks (pinns) for fluid mechanics: A review. *Acta Mechanica Sinica*, 2021.

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. *Neural Information Processing Systems*, 2018.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. *arXiv preprint*, 2014.

Filipe de Avila Belbute-Peres and J Zico Kolter. Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth. In *International Conference on Learning Representations*, 2023.
MWMG Dissanayake and Nhan Phan-Thien. Neural-network-based approximations for solving partial differential equations. *Communications in Numerical Methods in Engineering*, 1994. Emilien Dupont, Arnaud Doucet, and Yee Whye Teh. Augmented neural odes. *Neural Information Processing Systems*, 2019. Rizal Fathony, Anit Kumar Sahu, Devin Willmott, and J Zico Kolter. Multiplicative filter networks. In *International Conference on Learning Representations*, 2021. Marc Anton Finzi, Andres Potapczynski, Matthew Choptuik, and Andrew Gordon Wilson. A stable and scalable method for solving initial value pdes with neural networks. In *International Conference on Learning Representations*, 2023.
tYsDoVj1At
It raises the question of how sensitive the results might be to the SGD hyperparameters, such as the number of iterations and the learning rate. Are there significant variations in the search outcomes based on these parameters? Additionally, the need for SGD-based network initialization raises the question of whether the method still depends on an input pre-trained network, and if so, how this aligns with the paper's objectives.
AUTOMATED SEARCH-SPACE GENERATION FOR SUB-NETWORK SEARCH WITHIN DEEP NEURAL NETWORKS

Anonymous authors
Paper under double-blind review

ABSTRACT

To search for an optimal sub-network within a general deep neural network (DNN), existing neural architecture search (NAS) methods typically rely on handcrafting a search space beforehand. Such requirements make it challenging to extend them to general scenarios without significant human expertise and manual intervention. To overcome these limitations, we propose Automated Search-Space Generation for Sub-Network Search within DNNs (ASGSSD), perhaps the first automated system to train general DNNs that cover all candidate connections and operations and to produce high-performing sub-networks in a one-shot manner. Technologically, ASGSSD delivers three noticeable contributions that minimize human effort: (i) automated search-space generation for general DNNs; (ii) a Hierarchical Half-Space Projected Gradient (H2SPG) that leverages the hierarchy and dependency within the generated search space to ensure network validity during optimization, and reliably produces a solution with both high performance and hierarchical group sparsity; and (iii) automated sub-network construction upon the H2SPG solution. Numerically, we demonstrate the effectiveness of ASGSSD on a variety of general DNNs, including RegNet, StackedUnets, SuperResNet, and DARTS, over benchmark datasets such as CIFAR10, Fashion-MNIST, ImageNet, STL-10, and SVHN. The sub-networks computed by ASGSSD achieve competitive, even superior, performance compared to the starting full DNNs and the state of the art.

1 INTRODUCTION

Deep neural networks (DNNs) have achieved remarkable success in various fields, and this success is highly dependent on their sophisticated underlying architectures (LeCun et al., 2015; Goodfellow et al., 2016). To design effective DNN architectures, human experts have handcrafted numerous popular DNNs such as ResNet (He et al., 2016) and the transformer (Vaswani et al., 2017). However, such human effort may not be scalable enough to meet the increasing demand for customizing DNNs for diverse tasks. To address this issue, Neural Architecture Search (NAS) has emerged to automate network creation and reduce the need for human expertise (Elsken et al., 2018).

In the realm of NAS studies, discovering the optimal sub-network within a general DNN that covers all candidate connections and operations stands as a pivotal topic. Gradient-based methods (Liu et al., 2018; Yang et al., 2020; Xu et al., 2019; Chen et al., 2021c) are perhaps the most popular for this discovery because of their efficiency. Such methods parameterize operation candidates by introducing auxiliary architecture variables with weight sharing, then search for a (sub)optimal sub-network by formulating and solving a multi-level optimization problem.

Despite the advancements in gradient-based NAS methods, their usage is still limited due to certain inconveniences. In particular, their automation relies on manually determining the search space for a pre-specified DNN beforehand, and requires the manual introduction of auxiliary architecture variables onto the prescribed search space. To extend these methods to other DNNs, end-users still need to manually construct the search pool, then incorporate the auxiliary architecture variables and build the whole complicated multi-level optimization training pipeline.
The whole process necessitates significant domain knowledge and engineering effort, and is thereby inconvenient and time-consuming for users. Therefore, it is natural to ask whether we could reach an

Objective. Given a general DNN, automatically generate its search space, train it once, and construct a sub-network that achieves a dramatically compact architecture and high performance.

Figure 1: Overview of ASGSSD. Given a general DNN, ASGSSD first automatically generates a search space, then employs H2SPG to identify redundant removal structures and train the important counterparts to high performance, and finally constructs a compact and high-performing sub-network.

Achieving the objective is severely challenging in terms of both engineering development and algorithmic design, and consequently has not yet been achieved by existing works to the best of our knowledge. We now build Automated Search-Space Generation for Sub-Network Search within Deep Neural Networks (ASGSSD), which first reaches the objective. Given a DNN that covers all operation and connection candidates, ASGSSD automatically generates a search space, trains and identifies redundant structures, then builds a sub-network that achieves both high performance and compactness, as shown in Figure 1. The whole procedure proceeds automatically, dramatically reduces human effort, and fits general DNNs and applications. Our main contributions can be summarized as follows.

• **Automated Search Space Generation and Sub-Network Construction.** We propose a novel graph algorithm to automatically exploit the architecture of a general DNN, then analyze the hierarchy and dependency across different operators to form a search space. The established search space consists of the structures that could be removed without interrupting the functionality of the remaining DNN. We further propose a novel graph algorithm to automatically construct a sub-network upon the starting DNN parameterized by the subsequent H2SPG solution.

• **Hierarchical Half-Space Projected Gradient (H2SPG).** We propose a novel H2SPG, perhaps the first optimizer that solves a hierarchical structured sparsity problem for general DNN applications. H2SPG computes a solution of both high performance and the desired sparsity level. Compared to other sparse optimizers, H2SPG conducts a dedicated hierarchical search phase over the generated search space to ensure the validity of the constructed sub-network.

• **Experimental Results.** We demonstrate the effectiveness of ASGSSD on extensive DNNs including RegNet, StackedUnets, SuperResNet and DARTS, over benchmark datasets including CIFAR10, Fashion-MNIST, ImageNet, STL-10, and SVHN. To the best of our knowledge, ASGSSD is the first framework that can automatically deliver compact sub-networks upon general DNNs. Meanwhile, the sub-networks exhibit competitive, even superior, performance relative to the full networks.

2 RELATED WORK

**Automatic Search-Space Generation for Neural Architecture Search (NAS).** One main pain point of existing NAS methods (Zoph & Le, 2016; Pham et al., 2018; Zoph et al., 2018; Liu et al., 2018; Chen et al., 2019; Xu et al., 2019; Yang et al., 2020; Hosseini & Xie, 2022) is the need to manually establish the search space. The definition of the search space varies across different NAS scenarios. In our scenario, we aim to automatically discover a high-performing compact sub-network given a general DNN.
The starting DNN is assumed to cover all operation and connection candidates, and the resulting sub-network serves as its sub-computational-graph. Therefore, the search space in our scenario is defined as the set of removal structures of the given DNN. It is noteworthy that automated search-space generation for our target NAS scenario, along with a novel end-to-end automated pipeline, is a crucial gap in the NAS realm that has seen rare exploration. There exist orthogonal search-space (super-network) definitions, along with several works on automation. In the context of (Munoz et al., 2022; Radosavovic et al., 2020; Calhas et al., 2022; Chen et al., 2023; Fang et al., 2023), the presence of operators in DNNs is preserved, yet their inherent hyperparameters, such as channel, stride and depth for convolutional layers, are searchable. Consequently, the inherent hyperparameters of the existing operators constitute their search space. Zhou et al. (2021) defines the search space as the network that encompasses all candidate operations, and investigates methods to automatically generate high-quality super-networks that include optimal sub-networks. Our approach stays complementary and distinct to these definitions and could operate with them jointly to form the landscape of automated search-space generation.

**Neural Architecture Optimization.** Since ASGSSD needs a starting DNN to search for sub-networks, another related realm is optimization over a pre-specified neural architecture. NAO (Luo et al., 2018) encodes the DNN architecture into a latent representation, searches over the latent space, then decodes back to a revised architecture. NAT (Guo et al., 2019) performs operator transformation upon the given DNN to produce a more accurate network. These approaches transform and improve existing DNNs, yet do not search for an optimal sub-network. As a result, their produced networks are typically not significantly compact compared to the baseline models. Contrarily, our approach focuses on automatically and effectively discovering compact sub-networks given pre-specified DNNs.

### 3 ASGSSD

ASGSSD is an automated one-shot system designed to train a general DNN and subsequently construct a sub-network. The resulting sub-network is not only high-performing but also has a remarkably compact architecture, making it well-suited for various deployment environments. The entire process of ASGSSD significantly reduces the necessity for human intervention and is compatible with a wide range of DNNs and applications.

As outlined in Algorithm 1, ASGSSD takes a starting DNN $M$, explores its trace graph, examines the inherent hierarchy, and autonomously constructs a search space (Section 3.1). Based on the hierarchy present within the search space, the corresponding trainable variables are segregated into a series of groups, adhering to structural constraints. Subsequently, a hierarchical structured sparsity optimization problem is formulated and addressed through a novel approach, the Hierarchical Half-Space Projected Gradient (H2SPG) (Section 3.2). H2SPG takes into account the hierarchy embedded within the generated search space and calculates a solution that achieves both high performance and the desired sparsity level. Ultimately, a compact sub-network $M^*$ is constructed by eliminating the structures associated with identified redundant structures and their dependent modules (Section 3.3).

**Algorithm 1 Outline of ASGSSD.**
1. **Input:** A general DNN $M$ to be trained and searched (no need to be pretrained).
2. **Automated Search Space Generation.** Analyze the trace graph of $M$, generate a search space, and partition the trainable parameters into a set of groups obeying the hierarchy of the search space.
3. **Train by H2SPG.** Seek a high-performing solution with hierarchical group sparsity.
4. **Automated Sub-Network Construction.** Construct a sub-network $M^*$ upon the H2SPG solution.
5. **Output:** Constructed sub-network $M^*$. (Post fine-tuning is optional.)

### 3.1 AUTOMATED SEARCH SPACE GENERATION

The initial step of ASGSSD is to automatically generate a search space for a general DNN, whose definition varies across distinct NAS scenarios (see more in Section 2). In our context, the search space is defined as the set of structures that can be omitted from the given DNN while ensuring that the remaining network continues to function normally, i.e., remains a valid DNN. We refer to such structures as the removal structures of DNNs. Consequently, the generation of the search space is formulated as the discovery of these removal structures. This process poses significant challenges, encompassing both engineering developments and algorithmic designs. These challenges arise due to the intricate architecture of DNNs, the distinct roles of operators, and a scarcity of sufficient public APIs. To address these challenges and accomplish our goal, we have developed a dedicated graph algorithm, stated as Algorithm 2. The generation of the search space involves two main phases. The first phase explores the trace graph of the DNN $M$ and establishes a segment graph $(V_s, E_s)$. The second phase leverages the affiliations inside the segment graph to find the removal structures, then partitions their trainable variables into a set of groups. For intuitive illustration, we elaborate the algorithm through a small but complex demo DNN depicted in Figure 2a.

**Segment Graph Construction.** Given a general DNN $M$, we first construct its trace graph $(V, E)$, displayed as Figure 2a (line 3 in Algorithm 2), which is a directed acyclic graph that tracks the data flow of the DNN forward pass, obtained via the PyTorch API (Paszke et al., 2019), where $V$ represents the set of vertices (operations) and $E$ represents the connections among them.

Figure 2: Automated Search Space Generation. (a) The DemoNet to be trained and searched; (b) the constructed segment graph; and (c) the trainable variable partition, where $G_s$ represents the variable groups corresponding to removal structures. $\hat{K}_i$ and $b_i$ are the flattened filter matrix and bias vector for Conv-$i$, respectively. $\gamma_i$ and $\beta_i$ are the weight and bias vectors for BN-$i$. $W_i$ is the weight matrix for Linear-$i$. The columns of $\hat{K}_6$ are marked in accordance with its incoming segments.

Algorithm 2 Automated Search Space Generation.
1: **Input:** A super-network $M$ to be trained and searched.
2: **Segment graph construction.**
3: Construct the trace graph $(V, E)$ of $M$.
4: Initialize an empty graph $(V_s, E_s)$.
5: Initialize queue $Q \leftarrow \{S(v) : v \in V \text{ is adjacent to the input of the trace graph}\}$.
6: **while** $Q \neq \emptyset$ **do**
7: Dequeue the head segment $S$ from $Q$.
8: Grow $S$ in the depth-first manner until meeting either a joint vertex or a multi-outgoing vertex $\hat{v}$.
9: Add segments into $V_s$ and connections into $E_s$.
10: Enqueue new segments into the tail of $Q$ if $\hat{v}$ has outgoing vertices.
11: **Discovery of removal structures.**
12: Get the incoming vertices $\hat{V}$ of the joint vertices in $(V_s, E_s)$.
13: Group the trainable variables in each vertex $v \in \hat{V}$ as $g_v$.
14: Form $G_s$ as the union of the above groups, i.e., $G_s \leftarrow \{g_v : v \in \hat{V}\}$.
15: Form $G^C_s$ as the union of the trainable variables in the remaining vertices.
16: **Return** the trainable variable partition $G = G_s \cup G^C_s$ and the segment graph $(V_s, E_s)$.

We particularly refer to vertices that aggregate multiple inputs into a single output as **joint vertices**, e.g., Add and Concat. We then analyze the trace graph $(V, E)$ to create a segment graph $(V_s, E_s)$, wherein each vertex in $V_s$ serves as a potential removal structure candidate. To proceed, we use a queue container $Q$ to track the candidates (line 5 of Algorithm 2). The initial elements of this queue are the vertices that are directly adjacent to the input of $M$, such as Conv1. We then traverse the graph in a breadth-first manner, iteratively growing each element (segment) $S$ in the queue until a valid removal structure candidate is formed. The growth of each candidate follows a depth-first search that recursively expands $S$ until the current vertices are considered endpoints. An endpoint vertex is determined by whether it is a joint vertex or has multiple outgoing vertices, as indicated in line 8 of Algorithm 2. Intuitively, a joint vertex has multiple inputs, which means that the DNN may still be valid after removing the current segment. This suggests that the current segment may be removable. On the other hand, a vertex with multiple outgoing neighbors implies that removing the current segment may cause some of its children to miss their input tensor. For instance, removing Conv1-BN1 would cause Conv2, MaxPool and AvgPool to become invalid due to the absence of input in Figure 2a. Therefore, it is risky to remove such candidates. Once the segment $S$ has been grown, new candidates are initialized as the outgoing vertices of the endpoint and added into the container $Q$ (line 10 in Algorithm 2). This procedure is repeated until the end of the traversal and returns the segment graph $(V_s, E_s)$ in Figure 2b.

**Discovery of Removal Structures.** We proceed to identify the removal structures in $(V_s, E_s)$ to generate the search space. The qualified instances are the vertices in $V_s$ that have trainable variables and whose outgoing vertices are all joint vertices. This is because a joint vertex has multiple inputs and remains valid even after removing some of its incoming structures, as indicated in line 12 in Algorithm 2. Consequently, their trainable variables are grouped together into $G_s$ (lines 13-14 in Algorithm 2 and Figure 2c). The remaining vertices are considered either unremovable or belonging to a larger removal structure, and their trainable variables are grouped into $G^C_s$ (the complement of $G_s$). As a result, for the given DNN $M$, all its trainable variables are encompassed by the union $G = G_s \cup G^C_s$, and the removal structures corresponding to the variable groups $G_s$ constitute the search space of $M$. The next step is to discover the important removal structures that form an optimal sub-network.
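To make the removal-structure discovery concrete, here is a minimal toy sketch in the spirit of Algorithm 2's second phase. The graph, the vertex names, and the set of vertices that carry trainable parameters are illustrative assumptions, not the actual ASGSSD library; the selection rule follows the description above (a segment joins $G_s$ if it has trainable variables and all of its outgoing vertices are joint vertices).

```python
# Toy removal-structure discovery on a hand-written segment graph.
from collections import defaultdict

# Edges of a small segment graph (V_s, E_s), loosely following the spirit of Figure 2b.
edges = [("conv1_bn1", "conv2_bn2"), ("conv1_bn1", "maxpool_conv3_bn3"),
         ("conv1_bn1", "avgpool_conv4_bn4"), ("conv2_bn2", "concat"),
         ("maxpool_conv3_bn3", "concat"), ("avgpool_conv4_bn4", "concat"),
         ("concat", "conv6_bn6"), ("conv6_bn6", "add"), ("conv7_bn7", "add")]
has_params = {"conv1_bn1", "conv2_bn2", "maxpool_conv3_bn3",
              "avgpool_conv4_bn4", "conv6_bn6", "conv7_bn7"}

out_edges, in_degree = defaultdict(set), defaultdict(int)
for u, v in edges:
    out_edges[u].add(v)
    in_degree[v] += 1

# Joint vertices aggregate multiple inputs into one output (e.g., Add, Concat).
joint = {v for v, d in in_degree.items() if d > 1}

# A segment is a removal-structure candidate (grouped into G_s) when it carries
# trainable variables and every one of its outgoing vertices is a joint vertex.
G_s = sorted(v for v in has_params if out_edges[v] and out_edges[v] <= joint)
G_s_complement = sorted(has_params - set(G_s))

print("search space G_s:", G_s)          # the parallel branches feeding Concat/Add
print("complement G_s^C:", G_s_complement)
```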
3.2 Hierarchical Half-Space Projected Gradient (H2SPG)

Given a general DNN $M$ and its variable group partition $G = G_s \cup G^C_s$, the next step is to jointly search for a valid sub-network $M^*$ that exhibits the most significant performance and to train it to high performance. Searching for a sub-network is equivalent to identifying the redundant groups of variables in the removal variable groups $G_s$ to be removed, while ensuring the remaining network is still valid. Training the sub-network becomes optimizing over the remaining important groups in $G$ to achieve high performance. We formulate a hierarchical structured sparsity problem to accomplish both tasks:

$$\operatorname*{minimize}_{x} \; f(x), \quad \text{s.t.} \quad \text{Cardinality}(G^0) = K, \quad \text{and} \quad (V_s/V_{G^0}, E_s/E_{G^0}) \text{ is valid}, \tag{1}$$

where $f$ is the prescribed loss function, and $G^0 := \{g \in G_s \mid x_g = 0\}$ is the set of zero groups in $G_s$, whose cardinality measures its size. $K$ is the target hierarchical group sparsity, indicating the number of removal structures that should be identified as redundant. The trainable variables in redundant removal structures are projected onto zero, while the trainable variables in important structures are preserved as non-zero and optimized for high performance. A larger $K$ dictates a higher sparsity level that produces a more compact sub-network with fewer FLOPs and parameters. $(V_s/V_{G^0}, E_s/E_{G^0})$ refers to the graph after removing the vertices and edges corresponding to the zero groups $G^0$. Its validity requires the zero groups to be distributed in accordance with the hierarchy of the segment graph, so that the resulting sub-network functions correctly.

Problem (1) is difficult to solve due to the non-differentiable and non-convex sparsity constraint and the graph validity constraint. Existing optimizers such as HSPG (Chen et al., 2020; Dai et al., 2023) and proximal methods (Deleu & Bengio, 2021) overlook the architecture evolution and hierarchy during the sparsity exploration, which is crucial to (1). In fact, they are mainly applied to orthogonal and distinct pruning tasks, where the connections and operations are preserved yet become slimmer. Consequently, employing them on (1) usually produces invalid sub-networks.

Algorithm 3 Hierarchical Half-Space Projected Gradient
1: **Input:** initial variable $x_0 \in \mathbb{R}^n$, initial learning rate $\alpha_0$, target group sparsity $K$, segment graph $(V_s, E_s)$ and group partition $G = G_s \cup G^C_s$.
2: **Hierarchical Search Phase.**
4: Initialize the remaining segment graph $(\hat{V}, \hat{E}) \leftarrow (V_s, E_s)$.
5: Calculate the saliency score via a modular proxy for each $g \in G_s$ and sort them.
7: **for** $g \in G_s$ ordered by saliency scores ascendingly **do**
8: **if** $(\hat{V}/\{v_g\}, \hat{E}/E_g)$ is valid and $|G_r| < K$ **then**
9: Update $G_r \leftarrow G_r \cup \{g\}$.
10: Update $(\hat{V}, \hat{E}) \leftarrow (\hat{V}/\{v_g\}, \hat{E}/E_g)$.
11: **Hybrid Training Phase.**
12: **for** $t = 0, 1, \ldots$ **do**
13: Compute the gradient estimate $\nabla f(x_t)$ or its variant.
14: Update $[x_{t+1}]_{G^C_r}$ as $[x_t - \alpha_t \nabla f(x_t)]_{G^C_r}$.
15: Perform the half-space projection over $[x_t]_{G_r}$.
16: **Return** the final or the best iterate as $x^*_{\text{H2SPG}}$.

Figure 3: Check validity of redundant candidates. Target group sparsity $K = 3$. Conv7-BN7 has a smaller salience score than Conv2-BN2. Dotted vertices are marked as redundant candidates.
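Lines 7-10 of Algorithm 3 are the key departure from non-hierarchical sparse optimizers. The following toy sketch illustrates that selection loop in the spirit of Figure 3; the graph, the saliency scores, and the target sparsity are made-up assumptions, and the validity test is reduced to an input-to-output connectivity check as described in the text.

```python
# Toy hierarchical search phase: accept redundant candidates in ascending saliency
# order only if erasing them keeps the input connected to the output.
import networkx as nx

G = nx.DiGraph([("input", "conv1"), ("conv1", "conv2"), ("conv1", "conv3"),
                ("conv2", "add"), ("conv3", "add"), ("add", "conv7"),
                ("conv7", "output")])
saliency = {"conv7": 0.05, "conv2": 0.10, "conv3": 0.40}  # smaller = less important
K = 2  # target number of redundant removal structures

redundant, remaining = [], G.copy()
for v in sorted(saliency, key=saliency.get):
    if len(redundant) == K:
        break
    trial = remaining.copy()
    trial.remove_node(v)
    if nx.has_path(trial, "input", "output"):  # validity: the network still functions
        redundant.append(v)
        remaining = trial

# Prints ['conv2']: conv7 is skipped despite its lower score because removing it would
# disconnect the output, mirroring the Conv7-BN7 case in Figure 3.
print(redundant)
```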
**Outline of H2SPG.** To effectively solve problem (1), we propose a novel H2SPG that considers the hierarchy and ensures the validity of the graph architecture after removing redundant vertices during the optimization process. To the best of our knowledge, H2SPG is the first optimizer that successfully solves such a hierarchical structured sparsity problem (1); its outline is stated in Algorithm 3. H2SPG is a hybrid multi-phase optimizer, distinguished by its dedicated designs catering to the hierarchical constraint, positioning it significantly apart from its non-hierarchical counterparts within the HSPG family of sparse optimizers (Chen et al., 2020; 2023; Dai et al., 2023). Initially, H2SPG categorizes groups of variables into important and potentially redundant segments through a hierarchical search phase. Subsequently, it applies specified updating mechanisms to the different segments to achieve a solution with both the desired hierarchical group sparsity and high performance via a hybrid training phase. The hierarchical search phase considers the topology of the segment graph $(V_s, E_s)$ to ensure the validity of the resulting sub-network. Vanilla stochastic gradient descent (SGD) or a variant such as Adam (Kingma & Ba, 2014) optimizes the important segments to achieve high performance. Half-space gradient descent (Chen et al., 2020) identifies redundant segments among the candidates and projects them onto zero while sacrificing the objective function as little as possible.

**Hierarchical Search Phase.** To proceed, H2SPG first computes the saliency scores of the removal structures in the generated search space (line 5 in Algorithm 3). The saliency score measures the importance of each removal structure in the search space for forming an optimal sub-network. Its design and calculation are modular with respect to varying proxies, e.g., gradient-based proxies or training-free zero-shot proxies (Lin et al., 2021; Chen et al., 2021b; Li et al., 2023), depending on the needs of downstream tasks. If fidelity is the main focus, the score could be measured from the optimization perspective. If efficiency on hardware is the main focus, the score could favor hardware considerations. We proceed with the gradient-based proxy by default due to its flexibility for general applications and DNNs. In particular, we first warm up all variables by conducting SGD or a variant. During the warm-up, a salience score of each group $g \in G_s$ is computed and exponentially averaged. A smaller salience score indicates that the group exhibits less prediction power and thus may be redundant. By default, we follow DHSPG (Chen et al., 2023) and consider both the cosine similarity between the negative gradient $-\nabla f(x)_g$ and the projection direction $-x_g$, as well as the average variable magnitude. The former measures the approximate degradation of the objective function along the projection direction: a lower cosine similarity implies that the projection $-x_g$ might dramatically regress the objective function, and thus the current structure is more important. The latter measures the distance to the origin.

The next step is to form a set of redundant removal structure candidates $G_r$ while ensuring the validity of the remaining DNN after erasing these candidates (lines 6-10 in Algorithm 3). To proceed, we iterate over each group in $G_s$ in ascending order of salience scores. A remaining graph $(\hat{V}, \hat{E})$ is constructed by iteratively removing the vertex of each group, along with the corresponding adjacent edges, from $(V_s, E_s)$.
The sanity check verifies whether the graph $(\hat{V}, \hat{E})$ is still connected after the removal. If so, the variable group of the current vertex is added into $G_r$; otherwise, the next group is considered. As illustrated in Figure 3, although Conv7-BN7 has a smaller salience score than Conv2-BN2, Conv2-BN2 is marked as potentially redundant while Conv7-BN7 is not, since there is no path connecting the input and the output of the graph after removing Conv7-BN7. This mechanism largely guarantees that even if all redundant candidates are erased, the resulting sub-network still functions normally. The complementary groups, which have higher salience scores, are marked as important groups and form $G^C_r := G/G_r$.

**Hybrid Training Phase.** H2SPG then engages in the hybrid training phase to produce the desired group sparsity over $\mathcal{G}_r$ and to optimize over $\mathcal{G}^C_r$ for excellent performance until convergence. This phase mainly follows (Chen et al., 2023; Dai et al., 2023) and is briefly described for completeness. In general, for the important groups of variables in $\mathcal{G}^C_r$, vanilla SGD or a variant is employed to minimize the objective function (lines 13-14 in Algorithm 3). For the redundant group candidates in $\mathcal{G}_r$, a half-space projection step introduced in (Chen et al., 2020) is performed to progressively yield sparsity while sacrificing the objective function as little as possible. Finally, a high-performing solution $x^*_{\text{H2SPG}}$ with the desired hierarchical sparsity is returned.

### 3.3 Automated Sub-Network Construction

We finally construct a sub-network $\mathcal{M}^*$ from the given DNN $\mathcal{M}$ and the solution $x^*_{\text{H2SPG}}$ computed by H2SPG. The solution $x^*_{\text{H2SPG}}$ should attain the desired target hierarchical group sparsity level and achieve high performance. As illustrated in Figure 4, we first traverse the graph to remove from $\mathcal{M}$ the entire vertices and related edges corresponding to the redundant removal structures being zero, e.g., Conv2–BN2, MaxPool1–Conv3–BN3 and Conv8–BN8 are removed due to $|x^*_{\text{H2SPG}}|_{g_2 \cup g_3 \cup g_8} = 0$. Then, we traverse the graph in a second pass to remove the affiliated structures that depend on the removed vertices, so as to keep the remaining operations valid, e.g., the first and second columns in $\tilde{\mathcal{K}}_6$ are erased since their incoming vertices Conv2–BN2 and MaxPool–Conv3–BN3 have been removed (see Figure 4b). Next, we recursively erase unnecessary vertices and isolated vertices. Isolated vertices refer to vertices that have neither incoming nor outgoing vertices. Unnecessary vertices refer to skippable operations, e.g., Concat and the Add (between Conv7 and AvgPool) become unnecessary. Ultimately, a compact sub-network $\mathcal{M}^*$ is constructed, as shown in Figure 4c. Fine-tuning the constructed sub-network $\mathcal{M}^*$ is optional and often not necessary, particularly if the removed structures additionally exhibit the zero-invariant property (Chen et al., 2021a). More experimental results and ablation studies are presented in Appendix D.

### 4 Numerical Experiments

In this section, we employ ASGSSD to one-shot automatically train and search within general DNNs to construct compact sub-networks with high performance.
The numerical demonstrations cover extensive DNNs, including the DemoNet shown in Section 3, RegNet (Radosavovic et al., 2020), StackedUnets (Ronneberger et al., 2015), SuperResNet (He et al., 2016; Lin et al., 2021), and DARTS (Liu et al., 2018), and benchmark datasets including CIFAR10 (Krizhevsky & Hinton, 2009), Fashion-MNIST (Xiao et al., 2017), ImageNet (Deng et al., 2009), STL-10 (Coates et al., 2011) and SVHN (Netzer et al., 2011). More implementation details of the experiments, the ASGSSD library, and limitations are provided in Appendix A.

**DemoNet on Fashion-MNIST.** We first experiment with the DemoNet presented in Figure 2a on Fashion-MNIST. ASGSSD automatically establishes a search space for DemoNet and partitions its trainable variables into a set of groups. H2SPG then trains DemoNet from scratch and computes a solution of high performance and hierarchical group sparsity over the generated search space, which is further utilized to construct a compact sub-network as presented in Figure 4c. As shown in Table 1, compared to the super-network, the sub-network utilizes 54% of the parameters and 51% of the FLOPs to achieve a Top-1 validation accuracy of 84.7%, which is negligibly lower (by 0.2%) than that of the super-network.

Table 1: ASGSSD on extensive super-networks and datasets.

| Backend | Dataset | Method | FLOPs (M) | # of Params (M) | Top-1 Acc. (%) |
|---------------|-------------|----------|-----------|-----------------|----------------|
| DemoNet | Fashion-MNIST | Baseline | 209 | 0.82 | 84.9 |
| DemoNet | Fashion-MNIST | ASGSSD | 107 | 0.45 | 84.7 |
| StackedUnets | SVHN | Baseline | 184 | 0.80 | 95.3 |
| StackedUnets | SVHN | ASGSSD | 115 | 0.37 | 96.1 |

**StackedUnets on SVHN.** We then consider a StackedUnets over SVHN. The StackedUnets is constructed by stacking two standard Unets (Ronneberger et al., 2015) with different down-samplers together, as depicted in Figure 5a in Appendix E. We employ ASGSSD to automatically build the search space and train by H2SPG. H2SPG identifies and projects the redundant structures onto zero and optimizes the remaining important ones to attain excellent performance. As displayed in Figure 5c, the right-hand-side Unet is disabled since node-72–node-73–node-74–node-75 is identified as redundant. The path along the deepest depth of the left-hand-side Unet, i.e., node-13–node-14–node-15–node-19, is marked as redundant as well. The results of ASGSSD indicate that the performance gain brought by either composing multiple Unets in parallel or encompassing deeper scaling paths is not significant. ASGSSD also validates the human design, since a single Unet with properly selected depths has achieved remarkable success in numerous applications (Ding et al., 2022; Weng et al., 2019). Furthermore, as presented in Table 1, the sub-network built by ASGSSD uses 0.37M parameters and 115M FLOPs, which is noticeably lighter than the full StackedUnets, while significantly outperforming it by 0.8% in validation accuracy.

Table 2: ASGSSD over SuperResNet on CIFAR10.
| Architecture | Type | Search Space | Top-1 Acc (%) | # of Params (M) | Search Cost (GPU days) |
|-------------------|--------------|--------------|---------------|-----------------|------------------------|
| Zen-Score-2M (Lin et al., 2021) | Zero-Shot | ResNet Pool | 97.5 | 2.0 | 0.5 |
| TENAS (Chen et al., 2021b) | Zero-Shot | DARTS | 97.4 | 3.8 | 0.04 |
| SANAS-DARTS (Hosseini & Xie, 2022) | Gradient | DARTS | 97.5 | 3.2 | 1.2† |
| ISTA-NAS (He et al., 2020) | Gradient | DARTS | 97.5 | 3.3 | 0.1 |
| CDEP (Rieger et al., 2020) | Gradient | DARTS | 97.2 | 3.2 | 1.3† |
| DARTS (2nd order) (Liu et al., 2018) | Gradient | DARTS | 97.2 | 3.1 | 1.0 |
| PrDARTS (Zhou et al., 2020) | Gradient | DARTS | 97.6 | 3.4 | 0.2 |
| P-DARTS (Chen et al., 2019) | Gradient | DARTS | 97.5 | 3.6 | 0.3 |
| PC-DARTS (Xu et al., 2019) | Gradient | DARTS | 97.4 | 3.9 | 0.1 |
| **ASGSSD** | Gradient | SuperResNet | 97.5 | 2.0 | 0.1 |

The search cost is measured on an NVIDIA A100 GPU. † Numbers are approximately scaled based on (Hosseini & Xie, 2022).

**SuperResNet on CIFAR10.** Later on, we switch to a ResNet search space inspired by ZenNAS (Lin et al., 2021), referred to as SuperResNet. ZenNAS uses a ResNet pool to populate massive ResNet candidates and ranks them via a zero-shot proxy. Contrarily, we independently construct SuperResNet by stacking several super-residual blocks of varying depths. Each super-residual block contains multiple Conv candidates with kernel sizes 3x3, 5x5 and 7x7 in parallel (see Figure 7a). SuperResNet includes the optimal architecture derived by ZenNAS, and our aim is to discover the most suitable sub-networks using H2SPG over the automatically generated search space. The sub-network produced by ASGSSD reaches the benchmark of over 97% validation accuracy. Remark here that ASGSSD and ZenNAS use fewer parameters to achieve performance competitive with the DARTS benchmarks. This is because of the extra data augmentations, such as MixUp (Zhang et al., 2017), used by ZenNAS; ASGSSD follows the same training settings.

**DARTS (14-Cells) on ImageNet.** We now present the benchmark DARTS network stacked with 14 cells on ImageNet. We employ ASGSSD over it to automatically figure out the search space, for which the code base previously required dedicated handcrafting, train by H2SPG to identify redundant structures, and construct a sub-network as depicted in Figure 8d. Quantitatively, we observe that the sub-network produced by ASGSSD achieves competitive top-1/5 accuracy compared to other state-of-the-art methods, as presented in Table 3. Remark here that it remains an engineering challenge to inject architecture variables and build a multi-level optimization upon a search space that is automatically constructed and globally searched. The single-level H2SPG does not leverage a validation set or auxiliary architecture variables, as other methods do, to conduct multi-level optimization in favor of architecture search, nor does it search over operations without trainable variables, e.g., skip connections. Consequently, our achieved accuracy does not outperform PC-DARTS and ISTA-NAS. We leave further improvement via automated multi-level optimization as future work.

Table 3: ASGSSD over DARTS on ImageNet and comparison with state-of-the-art methods.

| Architecture | Top-1 Acc. (%) | Top-5 Acc. (%) | # of Params (M) | FLOPs (M) | Search Method |
|-------------------------------|---------------|---------------|-----------------|-----------|---------------|
| DARTS (2nd order) (CIFAR10) | 73.5 | 91.3 | 4.7 | 574 | Gradient |
| P-DARTS (CIFAR10) | 75.6 | 92.6 | 4.9 | 557 | Gradient |
| PC-DARTS (CIFAR10) | 74.9 | 92.2 | 5.3 | 586 | Gradient |
| SANAS (CIFAR10) | 75.2 | 91.7 | – | – | Gradient |
| ProxylessNAS (ImageNet) | 75.1 | 92.5 | 7.1 | 465 | Gradient |
| PC-DARTS (ImageNet) | 75.8 | 92.7 | 5.3 | 597 | Gradient |
| ISTA-NAS (ImageNet) | 76.0 | 92.9 | 5.7 | 638 | Gradient |
| MASNAS (ImageNet) | 74.7 | – | 2.6 | – | Multi-Agent |
| MixPath (ImageNet) | 77.2 | 93.5 | 5.1 | – | Gradient |
| **ASGSSD on DARTS (ImageNet)**| 75.9 | 92.8 | 4.9 | 552 | Gradient |

(CIFAR10) / (ImageNet) refer to using either CIFAR10 or ImageNet for searching the architecture.

**Ablation Study (RegNet on CIFAR10).** We finally conduct ablation studies over RegNet (Radosavovic et al., 2020) on CIFAR10 to demonstrate the necessity and efficacy of the hierarchical sparse optimizer H2SPG compared to existing non-hierarchical sparse optimizers, which is key to the success of ASGSSD. Without loss of generality, we employ ASGSSD over RegNet-800M, which achieves 95.01% accuracy on CIFAR10, and compare with the latest variant of HSPG, i.e., DHSPG (Chen et al., 2023). We evaluate them with varying target hierarchical group sparsity levels in problem (1) across the range \{0.1, 0.3, 0.5, 0.7, 0.9\}. As in the other experiments, ASGSSD automatically constructs the search space, trains via H2SPG or DHSPG, and establishes the sub-networks without fine-tuning. The results are from three independent runs under different random seeds and are reported in Table 4.

Table 4: ASGSSD on RegNet on CIFAR10.

| Backend | Method | Optimizer | Target Sparsity | # of Params (M) | Top-1 Acc. (%) |
|-------------|--------|-----------|-----------------|-----------------|---------------|
| RegNet-800M | ASGSSD | DHSPG | 0.1 | 5.56 ± 0.02 | 95.26 ± 0.13 |
| | | | 0.3 | (3.40, X, X) | (95.01, X, X) |
| | | | 0.5 | (X, X, X) | (X, X, X) |
| | | | 0.7 | (X, X, X) | (X, X, X) |
| | | | 0.9 | (X, X, X) | (X, X, X) |
| RegNet-800M | ASGSSD | H2SPG | 0.1 | 5.56 ± 0.01 | 95.30 ± 0.10 |
| | | | 0.3 | 3.54 ± 0.15 | 95.06 ± 0.14 |
| | | | 0.5 | 1.83 ± 0.09 | 94.63 ± 0.19 |
| | | | 0.7 | 1.16 ± 0.12 | 91.92 ± 0.24 |
| | | | 0.9 | 0.82 ± 0.17 | 87.91 ± 0.32 |

**Sub-networks by ASGSSD versus Full Networks.** The sub-networks under varying hierarchical group sparsity levels computed by ASGSSD with H2SPG form a Pareto frontier relative to the benchmark RegNets. Notably, the sub-networks at sparsity levels 0.1 and 0.3 outperform the full RegNet-800M. Furthermore, the one at sparsity level 0.5 outperforms RegNet(200M-600M), while utilizing significantly fewer parameters and achieving higher accuracy.

**H2SPG versus Other Sparse Optimizers.** DHSPG often fails when confronted with reasonably large target sparsity levels, denoted by the symbol X. The underlying reason lies in its design, which treats problem (1) solely as an independent and disjoint structured sparsity problem.
By disregarding the hierarchy within the network, DHSPG easily generates invalid sub-networks. Conversely, H2SPG takes the network hierarchy into account and successfully addresses problem (1). We also compare with a proximal method equipped with our hierarchical search phase, i.e., ProxSG+. Its performance is not competitive with H2SPG due to its less effective sparsity exploration (Dai et al., 2023).

**5 CONCLUSION**

We propose ASGSSD, the pioneering automated system that establishes search spaces for general DNNs and generates high-performing and compact sub-networks through a novel H2SPG. Remarkably, H2SPG stands as the first optimizer to address hierarchical structured sparsity problems for deep learning tasks. ASGSSD significantly minimizes the manual effort associated with many existing NAS works and pioneers a new trajectory. It also establishes benchmarks for automated NAS over general DNNs, which currently requires extensive handcrafting to create search spaces.

REFERENCES

Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. *arXiv preprint arXiv:1812.00332*, 2018.

Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. *arXiv preprint arXiv:1908.09791*, 2019.

David Calhas, Vasco M Manquinho, and Ines Lynce. Automatic generation of neural architecture search spaces. In *Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations*, 2022.

Tianyi Chen, Guanyi Wang, Tianyu Ding, Bo Ji, Sheng Yi, and Zhihui Zhu. Half-space proximal stochastic gradient method for group-sparsity regularized problem. *arXiv preprint arXiv:2009.12078*, 2020.

Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, and Xiao Tu. Only train once: A one-shot neural network training and pruning framework. In *Advances in Neural Information Processing Systems*, 2021a.

Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, and Ilya Zharkov. Otov2: Automatic, generic, user-friendly. In *The Eleventh International Conference on Learning Representations*, 2023.

Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. *arXiv preprint arXiv:2102.11535*, 2021b.

Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 1294–1303, 2019.

Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive darts: Bridging the optimization gap for nas in the wild. *International Journal of Computer Vision*, 129:638–655, 2021c.

Xiangxiang Chu, Shun Lu, Xudong Li, and Bo Zhang. Mixpath: A unified approach for one-shot neural architecture search. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 5972–5981, 2023.

Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011.

Yutong Dai, Tianyi Chen, Guanyi Wang, and Daniel Robinson. An adaptive half-space projection method for stochastic optimization problems with group sparse regularization. *Transactions on Machine Learning Research*, 2023.

Tristan Deleu and Yoshua Bengio.
Structured sparsity inducing adaptive optimizers for deep learning. *arXiv preprint arXiv:2102.03869*, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Tianyu Ding, Luming Liang, Zhihui Zhu, Tianyi Chen, and Ilya Zharkov. Sparsity-guided network design for frame interpolation. *arXiv preprint arXiv:2209.04551*, 2022. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. *arXiv preprint arXiv:1804.09081*, 2018. Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, and Xinchao Wang. Depgraph: Towards any structural pruning. *arXiv preprint arXiv:2301.12900*, 2023. Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. *Deep learning*, volume 1. MIT press Cambridge, 2016.
9rV9cp7KRH
Could you elaborate on the specific meaning and implications of “prior works has often focused on designing an incentive as a separate problem based on an existing collaboration scheme, instead of treating incentive as part of the learning itself”?
Incentivized Collaborative Learning: Architectural Design and Insights Anonymous authors Paper under double-blind review Abstract Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone. This is likely due to factors such as security constraints, privacy concerns, and limitations in computation resources. As a result, collaborative learning (CL) research has been gaining momentum. However, a significant challenge in practical applications of CL is how to effectively incentivize multiple entities to collaborate before any collaboration occurs. In this study, we propose ICL, an architectural framework for incentivized collaborative learning, and provide insights into the critical issue of when and why incentives can improve collaboration performance. Then, we apply the concepts of ICL to specific use cases in federated learning, assisted learning, and multi-armed bandit, corroborated with both theoretical and experimental results. 1 Introduction Over the past decade, artificial intelligence (AI) has achieved significant success in engineering and scientific domains, e.g., robotic control (Deisenroth et al., 2013; Kober & Peters, 2014), natural language processing (Li et al., 2016; Bahdanau et al., 2017), computer vision (Liu et al., 2017; Brunner et al., 2017), and finance (Lee et al., 2020; Lussange et al., 2021). With this trend, a growing number of entities, e.g., governments, hospitals, companies, and edge devices, are integrating AI models into their workflows to facilitate data analysis and enhance decision-making. While a variety of standardized machine learning models are readily available for entities to implement AI tasks, model performance heavily depends on the quality and availability of local training data, models, and computation resources (Goodfellow et al., 2016). For example, a local bank’s financial model may be constrained by the small size of its subjects and the number of feature variables. However, this bank could possibly improve its model by integrating additional observations and feature variables from other banks or industry sectors. Therefore, there is a strong need for collaborative learning that allows entities to enhance their model performance while respecting the proprietary nature of local resources. This has motivated recent research on learning frameworks, such as Federated Learning (FL) (Konecny et al., 2016; McMahan et al., 2017; Diao et al., 2021b; 2022c) and Assisted Learning (AL) (Xian et al., 2020b; Chen et al., 2023b; Diao et al., 2022b; Wang et al., 2022), which can improve learning performance from distributed data. 1.1 Design Goals of Incentivized Collaborative Learning Machine learning entities, similar to humans, can collaborate to accomplish tasks that benefit each participant. However, these entities possess local machine learning resources that can be highly heterogeneous in terms of training procedures, computation cost, sample size, and data quality. A key challenge in facilitating such collaborations is understanding the motivations and incentives that drive entities to participate in the first place. For example, a recent study on parallel assisted learning (Wang et al., 2022, Section III.D) has demonstrated cases where two entities, Alice and Bob, collaborate to improve the performance of their distinct tasks simultaneously. 
However, it may be the case that Alice can assist Bob more than the other way around. In such scenarios, an effective incentive mechanism is crucial for facilitating a "benign" collaboration in which high-quality entities are suitably motivated to maximize the overall benefit.

The need to deploy collaborative learning systems in the real world has motivated recent research on incentivized FL. The existing literature has studied different aspects of incentives in particular application cases, e.g., using contract theory to set the price for participating in FL, or evaluating contributions for reward or penalty allocation, which we review in Subsection 1.2. However, understanding when and why incentive mechanism design can enhance collaboration performance is under-explored. This work aims to address this problem by developing an incentivized collaborative learning framework and general principles that can be applied to many use cases. Three examples are given below to show potential use scenarios, revisited in later sections.

**Example 1.** Multiple clinics hold data on different patients. They must decide whether to participate in an FL platform to improve their local prediction performance. Specifically, each participating clinic will have the chance to be selected as an active participant to realize an FL-updated model using its local data and computation resources. All participants will receive the updated model. A clinic has the incentive to participate as long as the expected model gain is more valuable than the participation price it must pay to the platform, the other participants, or both. From the system perspective, the incentive aims to maximize the model gain or the monetary profit from entities' payments.

**Example 2.** Multiple entities collect surveys from the same cohort of customers but in different modalities, e.g., demographic and healthcare information. The user data can be collated by a non-private identifier, e.g., an encrypted username. They will use assisted learning to improve predictability by leveraging multiple modalities without sharing models. The incentive mechanism must enable the entities to reach a consensus on who should realize the collaboration so that the entities benefit.

**Example 3.** An investment team is recruiting strategists. Once recruited, a strategist will be selected by a multi-armed bandit (MAB) rule to realize a reward that will be enjoyed by all the participants. Each strategist will pay or be paid to participate. Under a good incentive, 1) a top-performing strategist may participate because it may be selected and receive collected payments from others, 2) a mediocre strategist may participate to enjoy the reward shared by the top strategist at a relatively small participation cost, and 3) a laggard strategist may not participate because of an overly high participation cost, which can in turn benefit the collaborative system by reducing exploration costs.

This work establishes an incentivized collaborative learning framework to abstract common application scenarios. In our framework, as illustrated in Fig. 1 and elaborated in Section 2, a set of learners play three roles as the game proceeds: 1) candidates, who decide whether to participate in the game based on a pricing plan, 2) participants, whose final payment, which can be negative if interpreted as a reward, depends on the pricing plan and the actual outcomes of collaboration, and 3) active participants, who jointly realize a collaboration gain to be enjoyed by all participants.
The system-level goal is to promote high-quality collaborations for an objective. Examples of the collaboration gain include improved models, predictability, and rewards. We will use the following design principles of incentives to benefit collaboration: (1) Each participant can simultaneously play the roles of contributor and beneficiary of the collaboration gain. (2) Each participant will pay to participate in return for a collaboration gain if the improvement over its local gain outweighs its participation cost. (3) The pricing plan determines each entity’s participation cost and can be positive or non-positive, tailored to reward those who contribute positively and charge those who hinder collaboration or disproportionately benefit from it. (4) The system for collaboration may incur a net zero cost, while still engaging entities to achieve the maximum possible collaborative gains. Our framework provides a unified understanding of the modular design of incentives from a system design perspective. We will show how incentives can be used to reduce exploration complexity and create win-win situations for participants from collaboration.
**Contribution of this work:** The main contributions are threefold.
- First, we propose an architectural framework for incentivized collaborative learning (ICL) along with design principles. These collectively formalize the role of incentives in learning, ensuring that eligible entities are motivated to actively foster collaboration and benefit all the participants.
- Second, we apply the proposed framework and principles to three concrete use scenarios, including incentivized FL, AL, and MAB, and quantify the insights. In particular, we demonstrate how the pricing and selection plans can play crucial roles in reducing the cost of exploration in learning, ultimately creating mutual benefits for all participating entities. We also conduct experimental studies to corroborate the developed theory and insights.
- Lastly, the developed ICL framework promotes interoperability among different collaborative learning settings, allowing researchers to adapt the same architecture across various use cases.

**Figure 1:** An overview of the incentivized collaborative learning game (Stage 1: candidates; Stage 2: pricing plan; Stage 3: selection plan; Stage 4: active participants; the coordinator exchanges information, payments, and the collaboration gain with the entities).

1.2 RELATED WORK

To address collaborative learning challenges, existing studies have focused on various aspects, such as security (Bhagoji et al., 2019; Bagdasaryan et al., 2020), privacy (Truex et al., 2019; Le et al., 2022), fairness (Yu et al., 2020b,c,a), personalization (Chen et al., 2022; Le et al., 2022; Khan et al., 2023), model heterogeneity (He et al., 2020; Diao et al., 2021b; Mushtaq et al., 2021), and lack of labels (Diao et al., 2022c; Mushtaq et al., 2023). However, a fundamental question remains: why would participants want to join collaborative learning in the first place? This has motivated an active line of research on using incentive schemes to enhance collaboration. We briefly review them below. Promoter of incentives. Who wants to design mechanisms to incentivize participants and initiate a collaboration?
From this angle, existing work can be roughly categorized in two classes: server-centric, meaning that a collaboration is initiated by a server who owns the model and aims to incentivize edge devices to join model training (Kang et al., 2019; Pandey et al., 2019; Zeng et al., 2020; Zhan et al., 2020), and participant-centric where the incentives are designed at the participants’ interest (Huang et al., 2022). Different goals of incentives What is the objective of an incentive mechanism design? Most existing work on incentivized collaborative learning, in particular FL, have adopted some common rules for incentive mechanism design, e.g. incentive compatibility and individual rationality (Kang et al., 2019; Zhan et al., 2020). The eventual objective for incentivized collaboration is often maximizing profit from the perspective of the incentive mechanism designer, which is either the coordinator (also called “server”, “platform”) (Kang et al., 2019) or the participants (also called “clients” in FL) (Huang et al., 2022). Another commonly studied objective is maximizing global model performance in FL, which can be commercialized and turned into profit (Zeng et al., 2020; Yu et al., 2020b; Cho et al., 2022). Other objectives being studied include budget balance (Tang & Wong, 2021; Zhang et al., 2022), computational efficiency (Xu et al., 2015; Zhang et al., 2022), fairness (Tay et al., 2022), and Pareto efficiency (Zeng et al., 2020). Components of incentive mechanism design How to implement incentives? The existing work has focused one of the following machinery: 1) pricing, in which participants bid for collaboration (using the auction theory) (Zeng et al., 2020; Zhang et al., 2022) or a coordinating server determines the price (based on contract theory) (Kang et al., 2019; Ye et al., 2022), and 2) payment allocation, based on contribution evaluation (Yang et al., 2022), fair sharing (Carbunar & Tripunitara, 2010; Yu et al., 2020a), rewarding, or local accuracy (Han et al., 2022). Overall, the role of incentives in collaborative learning has inspired many recent studies on bringing economic concepts to design learning platforms. Most existing work has focused on FL, especially mobile edge computing scenarios. Nonetheless, the need for collaboration extends beyond FL, as shown in (Tay et al., 2022) which studied synthetic data generation based on collaborative data sharing, and (Wang et al., 2022) which developed an AL framework where an entity being assisted is bound to assist others based on implicit mechanism design. Moreover, incentives in collaborative learning is under-studied in two critical aspects. Firstly, how to design incentives under a unified architecture, considering the existing work often focuses on specific application scenarios? Secondly, when and why do incentives improve collaboration performance? Prior work has often focused on designing an incentive as a separate problem based on an existing collaboration scheme, instead of treating incentive as part of the learning itself. These gaps motivated this work on ICL. 2 FORMULATION OF INCENTIVIZED COLLABORATIVE LEARNING (ICL) 2.1 OVERVIEW OF ICL AND INTUITIONS We will focus on a generic round and provide an overview of the ICL formulation below. As illustrated in Fig. 1, a collaboration consists of four stages. 
In Stage 1, the coordinator sets a pricing plan based on prior knowledge of the candidates’ potential gains (e.g., from previous rounds), and each candidate decides whether to be a participant by committing a payment at the end of this round. In Stage 2, the coordinator collects participants’ information (e.g., their estimated gains) and uses a selection plan to choose the active participants. In Stage 3, the active participants collaborate to produce an outcome, which is enjoyed by all participants (including non-active ones). In Stage 4, the coordinator charges according to the pricing plan, the realized collaboration gain, and individual gains of active participants. Here, a gain (e.g., decrease in test loss) is assumed to be a function of the realized outcome (e.g., trained model). 2.2 DETAILED EXPLANATIONS OF THE ICL COMPONENTS The proposed collaborative learning game includes two parties: candidate entities and coordinator. For notational convenience, we will first introduce a single-round game and extend it in Section 3. Candidates: Consider $M$ candidates indexed by $[M] \triangleq \{1, \ldots, M\}$. In an ICL game, each candidate $m$ can potentially produce an outcome $x_m \in X$, such as a model parameter. Any element in $X$ can be mapped to a gain $Z \in \mathbb{R}$, e.g., reduced prediction loss. But such a gain will not necessarily be realized unless the candidate becomes an active collaborator of the game. At the beginning of a game, a candidate will receive a pricing plan from the coordinator specifying the cost of participating in the game and use that to decide whether to become a participant of the game. If a candidate participates, it has the opportunity to be selected as an active participant. All active participants will then collaborate to produce an outcome (e.g., model or prediction protocol), which also generates a collaboration gain. This outcome is distributed among all participants to benefit them. At the end of the game, all participants must pay according to the pre-specified pricing plan, with the actual price depending on the realized collaboration gain. We let $\mathbb{I}_p$ and $\mathbb{I}_a$ denote the set of participants and active participants, respectively (so $\mathbb{I}_a \subseteq \mathbb{I}_p \subseteq [M]$). Given the above, an entity has a consumer-provider bi-profile in the sense that it can serve as a consumer who wishes to benefit from and also a provider who contributes to the collaboration. Coordinator: A coordinator, e.g., company, government agency, or platform, orchestrates the game by performing the following actions in order: determine a pricing plan of the participation costs based on initial information collected from candidates, select active participants from those candidates that have chosen to become participants, realize the collaboration gain, and charge the participants according to the gain. The coordinator can be a virtual entity rather than a physical one. Collaboration gain: Given active participants represented by $\mathbb{I}_a$, the collaborative gain is a function of their individual outcomes, denoted by $G : (x_m, m \in \mathbb{I}_a) \mapsto z_{1_a} \in \mathbb{R}$. This gain will be enjoyed by all participants and the coordinator, e.g., in the form of an improved model distributed by the coordinator. We also use $G : x_m \mapsto z_m \in \mathbb{R}, m \in [M]$, to denote the gain of an individual outcome. 
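To make the four-stage round concrete, here is a minimal Python sketch of a single ICL round; the names (`Candidate`, `run_round`) and the simple callable stand-ins for the pricing plan, selection plan, collaboration gain $G$, and utility-income function are illustrative assumptions rather than the paper's implementation, and the plans they stand in for are formalized in the next paragraphs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class Candidate:
    idx: int
    outcome: float      # x_m, e.g., a (scalar) model parameter
    local_gain: float   # z_m = G(x_m), realizable by standalone learning

def run_round(candidates: List[Candidate],
              will_participate: Callable[[Candidate], bool],
              selection_plan: Callable[[Set[int]], Set[int]],
              collab_gain: Callable[[List[float]], float],
              pricing_plan: Callable[[float, Dict[int, float]], Dict[int, float]],
              utility: Callable[[float], float]):
    # Stage 1: candidates opt in after seeing the pricing plan
    participants = {c.idx for c in candidates if will_participate(c)}
    # Stage 2: the coordinator selects the active set I_a from I_p
    active = selection_plan(participants)
    # Stage 3: active participants jointly realize the collaboration gain z_{I_a}
    z_active = collab_gain([c.outcome for c in candidates if c.idx in active])
    # Stage 4: participants are charged based on the realized collaboration gain
    # and the individual gains of the active participants
    active_gains = {c.idx: c.local_gain for c in candidates if c.idx in active}
    costs = pricing_plan(z_active, active_gains)
    profits = {c.idx: -costs.get(c.idx, 0.0) + utility(z_active) - utility(c.local_gain)
               for c in candidates if c.idx in participants}
    return participants, active, z_active, costs, profits
```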
Pricing plan: The pricing plan is a function from $\mathbb{R}^{|\mathbb{I}_a|+1}$ to $\mathbb{R}^{|\mathbb{I}_p|}$ that maps the collaboration gain and individual gains of active participants to a cost needed to participate in the game, denoted by $$P : (z, z_m, m \in \mathbb{I}_a) \mapsto (c_j, j \in \mathbb{I}_p),$$ where $z$ denotes the realized collaboration gain. In practice, we may parameterize $P$ so that it is low for active and good-performing entities, medium for non-active entities, and high for active and laggard/disruptive entities, a point we will demonstrate in the experiments. We assume that the active participants will share their individual gains, namely $z_m, m \in \mathbb{I}_a$, so that all other participants’ cost can be evaluated. The $P$ will provide incentives to each candidate to decide to participate or not. As such, we denote the set of participants by $\mathbb{I}_p = \text{Incent}(P)$. Profit: For each party, the profit will consist of two components: monetary profit from participation fees and gain-converted profit from collaboration gains. More specifically, let $c_m$ denote the final participation cost for entity $m$. Let the Utility-Income (UI) function $z \mapsto U(z)$ determine the amount of income uniquely associated with any particular gain $Z$. We suppose the UI function is the same for participants and the system. Then, the profit of client $m$ is $$\text{PROFIT}_m \triangleq 1_{m \in \mathbb{I}_p} \cdot (-c_m + U(z_{1_a}) - U(z_m)),$$ where $z_{1_a} \triangleq G(x_m, m \in \mathbb{I}_a)$, and the last term contrasts with its standalone learning. We define the system-level profit as the overall income from participation, $\sum_{m \in \mathbb{I}_p} c_m$, weighted plus the amount converted from collaboration gain, $U(z_{1_a})$, namely $$\text{PROFIT}_{sys} \triangleq \lambda \sum_{m \in \mathbb{I}_p} c_m + U(z_{1_a}),$$ where $\lambda \geq 0$ is a pre-specified control variable that balances the monetary income and collaboration gain. We can regard the system-level profit as the coordinator’s profit. Remark 1 (Coordinator’s profit). One may put additional constraints on the coordinator’s monetary income $\sum_{m \in I_p} c_m$. A particular case is to restrict that $\sum_{m \in I_p} c_m = 0$, which may be interpreted that the system does not need actual monetary income but rather uses the mechanism for model improvement. This is typical in coordinator-free decentralized learning (to revisit in Section 3.2). Selection plan. The coordinator will select the active participants $I_A$ from $I_p$ based on a set of available information, denoted by $I$. We assume that the $I$ consists of the coordinator’s belief of the distributions of $x_m$ (namely the realizable gain) for each client $m$ in $I_p$. The selection plan is a function that maps from $I$ and $I_p$ to a set $I_A \subseteq I_p$, denoted by $$S : (I, I_p) \mapsto I_A.$$ (4) This can be a randomized map, e.g., when each participant is selected with a certain probability (Section 2.4.2). In practice, $I$ may refer to the coordinator’s estimates of the underlying distribution of $x_m$, $m \in I_p$, based on historical performance on the participant side. Objective of mechanism design. Our objective in designing a collaboration mechanism is to maximize the system-level profit under constraints tied to candidates’ individual incentives, which will be revisited in Section 2.4. The maximization is over the pricing plan $P$ and selection plan $S$. 
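Before stating the mechanism-design objective, the profit bookkeeping in Eqs. (2)–(3) can be sketched directly. A minimal sketch follows, assuming a linear utility-income function and toy numbers; all names are illustrative.

```python
from typing import Callable, Dict, Set

def participant_profit(m: int, participants: Set[int], costs: Dict[int, float],
                       z_collab: float, local_gain: Dict[int, float],
                       U: Callable[[float], float]) -> float:
    # Eq. (2): zero if m stays out; otherwise participation cost plus the
    # gain-converted benefit of the collaboration over standalone learning
    if m not in participants:
        return 0.0
    return -costs[m] + U(z_collab) - U(local_gain[m])

def system_profit(participants: Set[int], costs: Dict[int, float],
                  z_collab: float, U: Callable[[float], float], lam: float) -> float:
    # Eq. (3): lambda-weighted participation income plus gain-converted income
    return lam * sum(costs[m] for m in participants) + U(z_collab)

U = lambda z: z                            # linear utility-income (assumption)
costs = {1: 0.5, 2: 0.2, 3: -0.7}          # entity 3 is rewarded (negative cost)
local = {1: 1.0, 2: 0.8, 3: 1.5}
print(system_profit({1, 2, 3}, costs, z_collab=2.0, U=U, lam=1.0))   # 2.0
print(participant_profit(3, {1, 2, 3}, costs, 2.0, local, U))        # 1.2
```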
With the earlier discussions, the objective is to solve $$\text{Objective: } \max_{P,S} \mathbb{E}\left\{\lambda \sum_{m \in I_p} c_m + U(z_{I_A})\right\}, \text{ where}$$ (5) $c_m$ is specified by $P$ in (1), $z_{I_A} = G(x_m, m \in I_A)$, $I_A = S(I, I_p)$, s.t. $I_p = \text{Incent}(P)$. (6) We will elaborate on (6) in Section 2.3. Remark 2 (Interpretation of the objective). We discuss three interesting cases of the objective. First, when $\lambda = 1$, the objective is equivalent to maximizing the overall profit. Second, when $\lambda = 0$, the objective is to improve the modeling through a collaboration mechanism. In this case, the system has no interest in the participation income but only provides a platform to incentivize the non-active to pay for the gain obtained by the active. Since the system need not pay for any participants, it is natural to assume the “zero-balance” constraint $\sum_{m \in I_p} c_m = 0$. Thus, we have $$\text{Objective: } \max_{P,S} \mathbb{E}\{U(z_{I_A})\}.$$ (7) Third, as $\lambda \to \infty$, the objective reduces to maximizing the system profit, $\max_{P,S} \mathbb{E}\{\sum_{m \in I_p} c_m\}$. Intuitively, the collaboration gain should still be reasonable to attract sufficiently many participants. Lastly, the following proposition shows that by properly replacing the $\lambda$, the system’s objective can be interpreted as an alternative objective that combines the system’s and participants’ gains. Proposition 1. Let $\lambda' \triangleq \frac{\lambda - 1}{|I_p| + 1}$. The Objective (5) where $\lambda'$ is replaced with $\lambda$ is equivalent to maximizing the average social welfare defined by $(\text{PROFIT}_{\text{sys}} + \sum_{m \in I_p} \text{PROFIT}_m)/(|I_p| + 1)$. 2.3 INCENTIVES OF PARTICIPATION We study the incentives of collaboration from the candidates’ perspectives. First, we will elaborate on (6) here. For each candidate, the incentive to become a participant is the larger profit of receiving the collaboration gain compared with realizing a gain on its own. Then, candidate $m$ has the incentive to participate in the game if and only if $$\text{Incent}_m : \mathbb{E}\{-c_m + U(z_{I_A}) - U(z_m)\} \geq 0,$$ (8) where $z_{I_A}$ and $c_m$ were introduced in (6). Here, $\mathbb{E}$ denotes the expectation regarding the random quantities, including the active participant set and the realized gains. Remark 3 (Inaccurate candidate). A candidate may have its own expectation $\mathbb{E}_m$ in place of $\mathbb{E}$ in (8) when making its decision. In this case, if the candidate is overly confident about the collaboration gain – its expected $z$ tends to be larger than the actual, either intentionally or not – it will participate in the game. The system can have a further screening of it: 1) if this participant is selected as an active participant, it will likely suffer from a penalty since its realized gain will be seen by the coordinator, which will implicitly give feedback as an incentive to that candidate; 2) if not selected, it will become an inactive participant, which will contribute to the system’s profit but not harm the collaboration. In this way, a candidate will have a limited extent to harm the system. 2.4 Mechanism design for the ICL game The idea of mechanism design in economic theory is to devise mechanisms to jointly regulate the decisions of multiple parties in a game to eventually attain a system’s desired goal (see, e.g., (Maskin, 2008)). 
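Before turning to the mechanism design itself, the participation test in (8) can be checked numerically. The Monte Carlo sketch below is illustrative only: the Gaussian gain distributions, the fixed expected cost, and the helper names are assumptions.

```python
import random

def has_incentive(expected_cost, sample_collab_gain, sample_local_gain,
                  U=lambda z: z, n_draws=10_000, seed=0):
    # Estimates E{-c_m + U(z_{I_a}) - U(z_m)} and checks Incent_m in Eq. (8).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += -expected_cost + U(sample_collab_gain(rng)) - U(sample_local_gain(rng))
    return total / n_draws >= 0.0

# Collaboration gain ~ N(2.0, 0.5), local gain ~ N(1.2, 0.5), expected cost 0.3:
print(has_incentive(0.3,
                    lambda r: r.gauss(2.0, 0.5),
                    lambda r: r.gauss(1.2, 0.5)))   # True, since 2.0 - 1.2 - 0.3 > 0
```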
In our ICL game, the system’s desired goal is to maximize (5), and the mechanisms to design include \( P \) and \( S \). Section 2.3 discussed the incentives from the candidates’ view. This subsection studies the mechanism designs from the system’s perspective. 2.4.1 Pricing plan: from candidates to participants From the system’s view, we can cast the \( M \) candidates and the coordinator as the parties in a game. Consider the following strategy choices of each party. Each candidate \( m \) has two choices: whether to participate or not, represented by \( b_m \triangleq 1_{m \in I_p} \in \{0, 1\} \); the coordinator has a choice of the pricing and selection plans, denoted by \((P, S)\). Following the notation in (4), for a set of participants that exclude \( m \), denoted by \( I_p^{(-m)} \), we let \( I_A^{(-m)} \triangleq S(I_p^{(-m)}) \) and \( I_A \triangleq S(I_p^{(-m)} \cup \{m\}) \). We have the following condition under a Nash equilibrium. **Theorem 1** (Equilibrium condition). The condition to attain Nash equilibrium is \[ \text{Incent}_m : \mathbb{E}\{-c_m + U(z_{I_A}) - U(z_m)\} \geq 0, \quad \text{if and only if } m \in I_p, \] \[ \text{Incent}_\text{sys} : \mathbb{E}\{\lambda c_m + U(z_{I_A}) - U(z_{I_A^{(-m)}})\} \geq 0, \quad \text{if and only if } m \in I_p. \] **Remark 4** (Pricing as a part of the collaborative learning). A critical reader may wonder why not price participants directly based on the realized gains, which we refer to as post-collaboration pricing, e.g., using the Shapley value (Roth, 1988; Khan et al., 2023). The main distinction is that our studied pricing plan can not only generate profit or reallocate resources on the system side but also influence collaboration gains. Specifically, the pricing plan can screen higher-quality candidates to allow the coordinator to improve model performance in the subsequent collaboration. For instance, consider the case where the sole purpose is to maximize collaboration gain, namely \( \lambda = 0 \). In this situation, an entity \( m \) violating the condition in (10) is treated as a laggard, and \( c_m \) can be designed accordingly to ensure this client will not participate, as per violating (9). 2.4.2 Selection plan: from participants to active participants We introduce a general probabilistic selection plan. Assume the information transmitted from participant \( m \) is a distribution of \( x_m \), denoted by \( P_m \) for all \( m \in I_p \). Suppose the system expects to select \( \rho \in (0, 1] \) proportion of the participants. Consider a probabilistic selection plan that will select each client \( m \) in \( I_p \) with probability \( q_m \in [0, 1] \). Let \( q = [q_m]_{m \in I_p} \). We thus have the constraint \[ q \in Q(\rho, I_p) \triangleq \left\{ q : \sum_{m \in I_p} q_m = \rho |I_p| \right\}. \] Let \( b_m \) denote an independent Bernoulli random variable with \( \mathbb{P}(b_m = 1) = q_m \), or \( b_m \sim B(q_m) \). Then, conditional on the existing participants \( I_p \), maximizing any system objective, e.g., (5) and (7), will lead to a particular law of client selection represented by \( q \). For example, for the objective (7), we may solve the following problem. \[ q^* = \arg \max_{q \in Q(\rho, I_p)} U(q) \triangleq \mathbb{E}\{U(z_{I_A}) = U(G(x_m, m \in I_A))\}, \] where the expectation is over \( b_m \sim B(q_m), x_m \in P_m \), and \( I_A = \{m \in I_p : b_m = 1\} \). We will show specific examples in Section 3. 
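The selection objective in (12) can also be evaluated numerically. A minimal Monte Carlo sketch follows, in which the Gaussian outcome distributions $P_m$, the averaging collaboration gain $G$, and the identity utility are assumptions for illustration.

```python
import random

def estimate_U(q, sample_x, G, U=lambda z: z, n_draws=2000, seed=0):
    # Monte Carlo estimate of E{U(G(x_m, m in I_A))} under b_m ~ Bernoulli(q_m);
    # draws with an empty active set contribute zero realized gain.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        active = [m for m, qm in enumerate(q) if rng.random() < qm]
        if active:
            total += U(G([sample_x[m](rng) for m in active]))
    return total / n_draws

# Three participants, target fraction rho = 1/3, so feasible q must sum to 1.
sample_x = [lambda r: r.gauss(1.0, 1.0),
            lambda r: r.gauss(2.0, 1.0),
            lambda r: r.gauss(0.5, 1.0)]
G = lambda xs: sum(xs) / len(xs)            # averaging collaboration gain (assumption)
for q in ([1/3, 1/3, 1/3], [0.0, 1.0, 0.0], [0.2, 0.6, 0.2]):
    print(q, round(estimate_U(q, sample_x, G), 3))
# Here, concentrating the selection mass on the strongest participant gives the
# largest estimated E{U(z_{I_A})}, i.e., it approximates q* in Eq. (12).
```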
The existing works have examined client sampling from perspectives other than incentives, such as minimizing gradient variance (Chen et al., 2020). **Remark 5** (Free-rider and adversarial participants). We will briefly discuss how pricing and selection plans can jointly address concerns regarding free-riders and adversarial participants. A free-rider is an entity \( m \) with a low local gain \( z_m \) but hopes to participate to enjoy the collaboration gain realized by other more capable participants with a relatively small participation cost. To that end, the entity may deliberately inform the system of a poor \( P_m \) so that the system, if following the above-optimized selection plan, will not select it as active. Consequently, the free-rider’s actual local gain will not be revealed and may not suffer a high participation cost. This case motivates us to adopt a random selection to a certain extent in selecting the active participants. More specifically, suppose every participant \( m \in I_p \) will have at least a \( \bar{\rho} > 0 \) probability of being selected to be active. Then, it expects to pay at least \( \bar{\rho} \cdot \mathbb{E}\{c_m\} = \bar{\rho} \cdot P(z, z_i, i \in I_A) \) in return for an additional model gain of \( \mathbb{E}\{U(z_{I_A}) - U(z_m)\} \), where \( I_A \) contains \( m \). Thus, it is not worth the entity \( m \)’s participation should the system design a cost function that meets the following: \( \bar{\rho} \cdot \mathbb{E}\{P(z, z_m, m \in I_A)\} \geq \mathbb{E}\{U(z_{I_A}) - U(z_m)\} \) for all \( z_m \) overly small. For example, the coordinator may impose a high cost whenever the realized local gain \( z_m \) revealed after the collaboration exceeds a pre-specified threshold. On the other hand, there may be an adversarial participant with a poor local gain but informs the system of an excellent \( P_m \) so that the system will select it to be active. In such cases, the same argument regarding the choice of the pricing plan applies, so no adversarial entity would dare to risk paying an excessively high cost after participating in the game. 3 USE CASES OF COLLABORATIVE LEARNING 3.1 ICL FOR FEDERATED LEARNING Federated learning (FL) (Konecny et al., 2016; McMahan et al., 2017; Diao et al., 2021b) is a popular distributed learning framework where the main idea is to learn a joint model using the averaging of locally learned model parameters. Its original goal is to exploit the resources of massive edge devices (also called “clients”) to achieve a global objective orchestrated by a central coordinator (“server”) in a way that the training data do not need to be transmitted. In line with FL, we suppose that at any particular round, the outcome of client \( m, x_m \), represents a model. The collaboration will generate an outcome in an additive form: \( z_{I_A} \triangleq G(x_m, m \in I_A) = G\left(\sum_{i \in I_p} \zeta_i x_i / \sum_{j \in I_p} \zeta_j\right) \), where \( \zeta_i \)'s are the pre-determined unnormalized positive weights associated with all the candidates, e.g., according to the sample size (McMahan et al., 2017) or uncertainty quantification (Chen et al., 2022). Let \( I_p \triangleq [K] \), where \( K \leq M \) is the number of participants, and \( M \) is the number of candidates. 3.1.1 LARGE-PARTICIPATION APPROXIMATION Suppose the selection plan is based on a random sampling of \( I_p \) with a given probability, say \( \rho \in (0, 1) \), for each participant to be active. 
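Before proceeding with the large-participation analysis, the $\zeta$-weighted FL aggregation just introduced can be sketched as follows; the sample-size weights and the two-dimensional toy "models" are assumptions for illustration.

```python
import numpy as np

def aggregate(models: dict, zeta: dict, active: set) -> np.ndarray:
    """Zeta-weighted average of the active participants' outcomes x_m (FedAvg-style)."""
    num = sum(zeta[m] * models[m] for m in active)
    den = sum(zeta[m] for m in active)
    return num / den

# Three clients with 2-dimensional "models", weighted by local sample size.
models = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0]), 3: np.array([2.0, 2.0])}
zeta = {1: 100, 2: 300, 3: 100}
print(aggregate(models, zeta, active={1, 2}))      # [0.25 0.75]
print(aggregate(models, zeta, active={1, 2, 3}))   # [0.6 1. ]
```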
Assume $b_m$, for $m \in [K]$, are IID Bernoulli random variables with $\mathbb{P}(b_m = 1) = \rho$. Let $U \circ G$ denote the composition of $U$ and $G$, and $(U \circ G)'$ its derivative. Then, we have the following result to approximate the equilibrium conditions in Section 2.4 under a large number of participating clients. Let $\zeta_{I_p}$ and $\bar{x}_{I_p}$ denote all the participants' average of $\zeta$ and weighted average of $x$, namely $\zeta_{I_p} \triangleq (\sum_{i \in I_p} \zeta_i)/|I_p|$ and $\bar{x}_{I_p} \triangleq (\sum_{i \in I_p} \zeta_i x_i)/(\sum_{i \in I_p} \zeta_i)$.

**Theorem 2.** Assume $\max\{|\zeta_m|, \|\zeta_m x_m\|_\infty\} \leq \tau$ for all $m \in I_p$ and $\log d/|I_p| \to 0$ as $|I_p| \to \infty$, where $|I_p|$ is the number of participants. Then, the conditions (9) and (10) can be written as
\[
\mathbb{E}\{c_m\} \leq \mathbb{E}\{(U \circ G)(\bar{x}_{I_p} + o_p(1)) - (U \circ G)(x_m)\},
\]
\[
\lambda \mathbb{E}\{c_m\} \geq \mathbb{E}\left\{(U \circ G)'(\bar{x}_{I_p} + o_p(1)) \frac{b_m}{(1 + o_p(1)) \rho} \times \frac{\zeta_m}{K \zeta_{I_p}} (\bar{x}_{I_p} - x_m)\right\},
\]
where $o_p(1)$ denotes a term whose $\ell_1$-norm converges in probability to zero as $|I_p|$ goes to infinity.

The proof is based on large-sample analysis, e.g., to show that $(\sum_{i \in I_p} b_i \zeta_i x_i)/(\sum_{j \in I_p} b_j \zeta_j)$ is asymptotically close to $\bar{x}_{I_p}$ for large $|I_p|$. From the above result and its proof, the system's marginal profit increase attributed to an entity $m$ is approximately
\[
b_m \cdot \mathbb{E}\left\{c_m - (U \circ G)'(\bar{x}_K) \frac{\zeta_m}{K \zeta_K} (\bar{x}_K - x_m)\right\}. \tag{13}
\]

3.1.2 ITERATIVE MECHANISM DESIGN

To optimize the mechanism in practice, the server cannot evaluate the collaboration gain $G(x_m, m \in I_A)$ due to its complex dependency on the individual outcomes $x_m$. Alternatively, the server can optimize its mechanism by maximizing the sum of the quantities in (13) as a surrogate of the collaboration gain. The determination of the pricing plan will involve multiple candidates' choices. Since a practical FL system involves multiple rounds, we suppose each client will use the results from the previous rounds to approximate the current strategy set and make the next moves. In Appendix D, we elaborate on this idea and propose a concrete incentivized FL algorithm built on Theorem 2.

It is envisioned that a reasonable incentive mechanism can enhance the robustness of FL training. To demonstrate this, we develop an experimental study that involves two types of Byzantine attacks: random modification (Xia et al., 2019) and label flipping (Jebreel et al., 2022). Byzantine client ratios are adjusted to two levels: {0.2, 0.3}. Among participating clients, the server will select 10% of clients as active clients per round. We studied three incentivized FL (labeled as "ICL") pricing plans $\gamma = 1, 2, 3$, where a larger $\gamma$ is interpreted as a higher cost per gain. We visualize the learning curves on the CIFAR10 dataset in Fig. 2. From the results, in comparison with non-incentivized FedAvg, the incentivized FL algorithm from our framework can deliver a higher and more rapidly increasing model performance.

Figure 2: Learning curves of ICL and FedAvg measured by Accuracy, assessed for random modification (RM), label flipping (LP) attacks, and two malicious client ratios (0.2 and 0.3), applied to the CIFAR10 dataset.
On the other hand, ICL with pricing plan 1 underperforms, which is expected as it only imposes a mild penalty on laggard/adversarial active clients. The results also suggest the importance of a reasonable pricing plan. Furthermore, the random modification attack poses a more significant threat compared to the label flipping attack, making it particularly difficult for non-incentivized FedAvg to converge. More extensive results are included in Appendix D. Remark 6 (Random sampling (RS) for noninformative scenarios). We show that if the system is noninformative, RS can be close to the optimal selection, following Section 2.4.2. Suppose the information from participant $m$ is the mean and variance of $x_m \in \mathbb{R}$, denoted by $\mu_m, \sigma^2$ for $m \in \mathbb{I}_p$. A large $\sigma^2$ means less information. The result below shows RS is close to the optimal for large $\sigma$. Proposition 2. Assume the gain is defined by $G(x) \triangleq -\mathbb{E}(x - \mu)^2$, where $\mu$ represents the underlying parameter of interest, and the participants’ weights $\zeta_m$’s are the same. Assume that $\sigma^2/(\|\mathbb{I}_p\| \cdot \max_{m \in \mathbb{I}_p} (\mu_m - \mu)^2) \to \infty$ as $\|\mathbb{I}_p\| \to \infty$. Then, we have $U(q)/U(q^*) \to_p 1$ as $\|\mathbb{I}_p\| \to \infty$. 3.2 ICL FOR ASSISTED LEARNING Assisted learning (AL) (Xian et al., 2020b; Diao et al., 2022b; Wang et al., 2022) is a decentralized learning framework that allows organizations to autonomously improve their learning quality within only a few assistance rounds and without sharing local models, objectives, or data. Unlike FL schemes, AL is 1) decentralized in that there is no global model to be shared or synchronized among all the entities in the training and 2) assistance-persistent in the sense that an entity still needs the output from other entities in the prediction stage. From the perspective of incentivized collaboration, the above naturally leads to two considerations with complementary insights into the pricing and selection plans compared with FL in Section 3.1. - **Consideration 1: Autonomous incentive design without a coordinator** Since each entity can initiate and terminate assistance, it is natural to consider a coordinator-free scenario, where entities can autonomously reach a consensus on collaboration partners based on their pricing plans. - **Consideration 2: Limited information for incentive design** In AL, an entity aims to seek assistance to enhance prediction performance without sharing proprietary local models. Thus, we suppose the communicated information for collaboration is limited to gains ($z$) rather than outcomes ($x$). To put it into perspective, we will study a three-entity setting in Section 3.2.1 to develop insights into the incentive that allows for a consensus on collaboration in Stage 1 (Fig. 1). In Section 3.2.2, we will further study a multi-entity setting where multiple less-competitive participants are allowed to enter Stage 2 to enjoy the collaboration gain, but they will not compete for being active participants. 3.2.1 CONSENSUS OF COMPETING CANDIDATES In this section, we study three candidate entities, Alice, Bob, and Carol, and suppose each candidate aims to maximize its profit. Suppose a collaboration round can only consist of two entities. Then, the collaboration will only occur when two out of the three, say Alice and Bob, can maximally assist each other. 
From Alice’s perspective, Carol is less competitive than Bob, and meanwhile, from Bob’s perspective, Carol is less competitive than Alice. We will provide conditions to reach a consensus. Suppose each entity has its own payment plan: entity $i$ will pay a price $p_i(z - z_i)$ for any given collaboration gain $z$ and its local gain $z_i$, for all $i \in [M]$. So, if entities $i$ and $j$ collaborate, the actual price $i$ will pay is $p_i(z - z_i) - p_j(z - z_j)$. The goal of each entity is to maximize the expected gain-converted profit minus the participation cost, namely the quantity in (2). For simplicity, suppose $p_i(\Delta z) = c_i \cdot \Delta z$ and $U(z) = u \cdot z$. Let $\mu_{i,j} \triangleq \mathbb{E}(U(z_{i,j}))$ denote the expected income of the collaboration gain formed by entities $i$ and $j$, and $\mu_{j \leftarrow i} \triangleq \mathbb{E}(U(z_{i,j}) - U(z_j))$ the additional gain brought by $i$ to $j$. With this setup, we have the following result.

Theorem 3. A consensus on collaboration exists if and only if there exist entities 1 and 2 satisfying
\[
(u - c_1)\mu_{1,2} + c_2\mu_{2 \leftarrow 1} \geq (u - c_1)\mu_{1,3} + c_3\mu_{3 \leftarrow 1}, \\
(u - c_2)\mu_{2,1} + c_1\mu_{1 \leftarrow 2} \geq (u - c_2)\mu_{2,3} + c_3\mu_{3 \leftarrow 2}. \tag{14}
\]

Inequality (14) can be interpreted as saying that the total gain entity 1 receives from entity 2, which consists of the collaboration-generated gain $(u - c_1)\mu_{1,2}$ and the pricing-based gain $c_2\mu_{2 \leftarrow 1}$, is no smaller than that from entity 3. To show our pricing plan can promote mutually beneficial collaboration, we apply ICL to the parallel assisted learning (PAL) framework (Wang et al., 2022) to develop an incentivized PAL. The detailed algorithms and experimental studies are included in Appendix E. We show an experiment using the real-world clinical dataset MIMIC (Johnson et al., 2016). Suppose three divisions, denoted by Entity 1, 2, and 3, collect heterogeneous features from the same patients for different tasks: predicting the heart rate, the systolic blood pressure, and the length of stay. In Setting $i$, Entity $i$ does not provide an incentive while the other two entities do. The results in Fig. 3 show that in Setting 1, Entity 1 will not gain much as it does not provide incentives; in the other two settings, it gains from collaborating with the entity with mutual benefits.

3.2.2 Consensus of Non-Competing Candidates

We will further study a multi-entity setting where multiple less-competitive participants are allowed to enjoy the collaboration gain, but they will not compete for being active participants. The objective is to develop an incentive to maximize the collaboration gain, namely Objective (7), that will eventually benefit all the participants. For ease of presentation, we will consider only one active participant (namely $|\mathbb{I}_A| = 1$). In general, we may regard a set of participants as one “mega” participant. Following the above two considerations of AL, we will first study the following setup. Suppose $K$ entities decide to participate in a collaboration, where one of them will be selected to realize the collaboration gain. For example, if participant $m$ is active, it will realize a model gain $z_m \sim G(x_m)$, where $x_m \sim P_m$ is the potential outcome of participant $m$. Let $P_m$ denote the distribution of $z_m$ induced by $P_m$ for $m \in [K]$ and suppose they are the shared information among participants. In line with Consideration 1, the system is set to have a zero balance, namely $\sum_{m \in [K]} c_m = 0$.
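Before specifying the pricing plan for this multi-entity setting, a brief aside: the pairwise consensus condition of Theorem 3 above can be checked mechanically. The sketch below assumes linear pricing and utility as in the theorem; `mu[i][j]` stands for $\mu_{i,j}$, `mu_from[j][i]` stands for $\mu_{j \leftarrow i}$, and the toy numbers are illustrative.

```python
from itertools import permutations

def gain_of(i, j, u, c, mu, mu_from):
    # Entity i's expected total gain from partnering with j (cf. Theorem 3):
    # collaboration-generated gain plus pricing-based gain received from j.
    return (u - c[i]) * mu[i][j] + c[j] * mu_from[j][i]

def find_consensus(u, c, mu, mu_from):
    for i, j in permutations(c.keys(), 2):
        k = (set(c.keys()) - {i, j}).pop()
        if (gain_of(i, j, u, c, mu, mu_from) >= gain_of(i, k, u, c, mu, mu_from)
                and gain_of(j, i, u, c, mu, mu_from) >= gain_of(j, k, u, c, mu, mu_from)):
            return (i, j)
    return None

u, c = 1.0, {1: 0.3, 2: 0.3, 3: 0.3}
mu = {1: {2: 5.0, 3: 3.0}, 2: {1: 5.0, 3: 3.5}, 3: {1: 3.0, 2: 3.5}}
mu_from = {1: {2: 2.5, 3: 0.5}, 2: {1: 2.0, 3: 1.0}, 3: {1: 1.0, 2: 1.5}}
print(find_consensus(u, c, mu, mu_from))   # (1, 2): these two agree to collaborate
```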
Following the notation in (1), we consider the following particular pricing plan:
\[
P : z \mapsto C(z)\mathbb{1}_{j \notin \mathbb{I}_A} - (K - 1)C(z)\mathbb{1}_{j \in \mathbb{I}_A}, \quad j \in \mathbb{I}_p, \tag{15}
\]
where $C$ is non-decreasing so that a larger gain is associated with a larger cost. In other words, each of the $K - 1$ non-active participants will pay a cost of $C(z)$, which depends on the realized $z$, to the active participant. Then, the condition to reach a consensus among participants is the existence of a participant, say participant 1, such that when it is active, (i) the collaboration gain is maximized, and (ii) every participant sees that its individual profit is maximized, as formalized below.

Theorem 4. Assume $U$ is a pre-specified non-decreasing function. Consider the pricing plan in (15) where $C$ can be any non-negative and non-decreasing function. Let $\mu_m \triangleq \mathbb{E}_{P_m}\{U(z_m)\}$ denote the expected gain of participant $m$ when it is active, $m \in [K]$. The necessary and sufficient condition for reaching a pricing consensus is the existence of a participant, say participant 1, that satisfies
\[
\mu_1 - \mu_j \geq \mathbb{E}_{P_1}\{C(z_1)\} + \mathbb{E}_{P_j}\{(K - 1)C(z_j)\} \quad \text{for all } j \neq 1.
\]
If we further assume the linearity $U(z) \triangleq u \cdot z$ and $C(z) = c \cdot z$, this inequality becomes $c \leq \min_{j \neq 1} u \cdot (\mu_1 - \mu_j)/(\mu_1 + (K - 1)\mu_j)$.

4 Conclusion

As the demand for learning tasks continues to grow, collaboration among entities becomes increasingly important for enhancing their performance. We proposed an ICL framework to study how entities can be properly incentivized to create common benefits. While our work provides an angle toward incentivizing collaborative learning, there are several limitations worth further investigation. For instance, future studies could examine the functional forms of pricing plans, use cases to promote model security (Wang et al., 2021; Xian et al., 2023) and privacy (Dwork et al., 2006; Ding & Ding, 2022), and potential trade-offs between collaboration and competition in collaborative learning contexts. The Appendix contains more details on ICL and related works, ethics discussion, concrete use cases and experiments of FL, AL, and MAB, and technical proofs.

REFERENCES

Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 2938–2948. PMLR, 2020.

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. In *Proc. International Conference on Learning Representations*, 2017.

Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. In *International Conference on Machine Learning*, pp. 634–643. PMLR, 2019.

Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, and Han Shao. One for one, or all for all: Equilibria and optimality of collaboration in federated learning. In *International Conference on Machine Learning*, pp. 1005–1014. PMLR, 2021.

Gino Brunner, Oliver Richter, Yuyi Wang, and Roger Wattenhofer. Teaching a machine to read maps with deep reinforcement learning. In *Proc. Association for the Advancement of Artificial Intelligence (AAAI)*, 11 2017.

Bogdan Carbunar and Mahesh Tripunitara.
Fair payments for outsourced computations. In *2010 7th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON)*, pp. 1–9. IEEE, 2010. Cheng Chen, Jiawei Zhang, Jie Ding, and Yi Zhou. Assisted unsupervised domain adaptation. *Proc. International Symposium on Information Theory*, 2023a. Cheng Chen, Jiaying Zhou, Jie Ding, and Yi Zhou. Assisted learning for organizations with limited data. *Transactions on Machine Learning Research*, 2023b. Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, and Tao Zhang. Self-aware personalized federated learning. *Conference on Neural Information Processing Systems*, 2022. Wenlin Chen, Samuel Horvath, and Peter Richtarik. Optimal client sampling for federated learning. *arXiv preprint arXiv:2010.13723*, 2020. Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, and Gauri Joshi. To federate or not to federate: Incentivizing client participation in federated learning. *arXiv preprint arXiv:2205.14840*, 2022. Luke N Darlow, Elliot J Crowley, Antreas Antoniou, and Amos J Storkey. Cinic-10 is not imagenet or cifar-10. *arXiv preprint arXiv:1810.03505*, 2018. M. P. Deisenroth, G. Neumann, and J. Peters. *A Survey on Policy Search for Robotics*. 2013. Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In *Prof. International Conference on Learning Representations*, 2021a. Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In *International Conference on Learning Representations*, 2021b. Enmao Diao, Jie Ding, and Vahid Tarokh. GAL: Gradient assisted learning for decentralized multi-organization collaborations. *Advances in Neural Information Processing Systems*, 35:11854–11868, 2022a. Enmao Diao, Jie Ding, and Vahid Tarokh. GAL: Gradient assisted learning for decentralized multi-organization collaborations. In *Advances in Neural Information Processing Systems*, 2022b.
ZBEs9CJiWs
Since the proposed method relies on off-the-shelf LLMs, it is important to investigate the outcome of their generations in different cultures or on people with different personalities. In other word, relying directly on LLMs that have been trained on data with specific social norms in the background should have different outcomes which urges a comprehensive study in various cultures/domains.
IMPROVING INTERPERSONAL COMMUNICATION BY SIMULATING AUDIENCES WITH LANGUAGE MODELS Anonymous authors Paper under double-blind review ABSTRACT How do we communicate with others to achieve our goals? We use our prior experience or advice from others, or construct a candidate utterance by predicting how it will be received. However, our experiences are limited and biased, and reasoning about potential outcomes can be difficult and cognitively challenging. In this paper, we explore how we can leverage Large Language Model (LLM) simulations to help us communicate better. We propose the Explore-Generate-Simulate (EGS) framework, which takes as input any scenario where an individual is communicating to an audience with a goal they want to achieve. EGS (1) explores the solution space by producing a diverse set of advice relevant to the scenario, (2) generates communication candidates conditioned on subsets of the advice, and (3) simulates the reactions from various audiences to determine both the best candidate and advice to use. We evaluate the framework on eight scenarios spanning the ten fundamental processes of interpersonal communication. For each scenario, we collect a dataset of human evaluations across candidates and baselines, and showcase that our framework’s chosen candidate is significantly preferred over popular generation mechanisms including Chain-of-Thought. Using a multi-level model, we find that audience simulations achieve significant agreement with human raters. Finally, we demonstrate the generality of our framework by applying it to real-world scenarios described by users on web forums. Through evaluations and demonstrations, we show that EGS enhances the effectiveness and outcomes of goal-oriented communication across a variety of situations, thus opening up new possibilities for the application of large language models in revolutionizing communication and decision-making processes. 1 INTRODUCTION We communicate with others in order to achieve our goals: to make friends, to accomplish tasks, or simply to convey our intentions (Grice [1975] Sperber & Wilson [1986]). However, it can be hard to find the right words to achieve those goals. Consider a scenario where you are trying to get a discount on an item by haggling with its vendor. There are many strategies that you could use to gain an edge, including complimenting the item, offering to buy multiple items for a discount, or even describing your financial situation and asking them to take pity. With so many potential options, it’s difficult to correctly decide which strategy to choose. This problem is not confined to bargaining—everyday communication requires us to make choices about what approaches to adopt, whether we are making friends, impressing others, or navigating romantic conflicts. Given a communication scenario, how do we decide which strategies to employ? Often, we rely on heuristics such as our prior experience (Schacter et al. [2007]) or on advice we receive from others (Yaniv [2004]). When we have more time to make careful decisions, we may even play out possible candidates in our minds, simulating the reaction of an imaginary listener and using their imagined reaction to guide our choice (Atance & O’Neill [2001]). This idea is formalized in the Rational Speech Act (RSA) model (Goodman & Frank [2016]), which explains people’s communication choices in terms of speakers simulating listeners as rational interpreters of possible candidate utterances. 
However, both our experiences and the advice of others are biased by the information we are exposed to, making our heuristics and simulations imperfect and resulting in suboptimal communication outcomes (Gilbert & Wilson [2007]). Moreover, reasoning about others’ potential reactions can be time-consuming and cognitively challenging (Gilbert et al. [1988]). Figure 1: Given a scenario and goal, EGS generates the best candidate message by simulating stakeholders using an LLM. It explores pieces of advice that might help, generates candidates conditioned on subsets of advice, and simulates audience members who evaluate the various candidates. Inspired by the newfound capacity of LLMs to simulate agents (Park et al., 2023), we propose the Explore-Generate-Simulate (EGS) framework, which supports people in exploring communication strategies and developing message candidates while offloading the cognitively challenging simulation of audience reactions. More precisely, given an arbitrary communication scenario, EGS first explores the space of possible responses by using an LLM to produce both normal and creatively unorthodox advice relevant to the scenario. Next, it generates communication candidates by conditioning an LLM on various subsets of the advice. Finally, it simulates the reception of each candidate by having the LLM take on the perspectives of key audiences. Using these simulations, we can determine which candidates and advice are best suited for achieving the communicator’s goal. By construction, our framework also lets us study whether LLMs, conditioned on the scenario and audience description, can effectively simulate audience reactions. To evaluate this framework, we construct eight diverse scenarios that span the ten fundamental processes of interpersonal communication (Berger & Roloff, 2019) and a variety of communication modalities, relationships, and settings (see Appendix A.1 and A.2). The scenarios include an airline representative speaking to the press after a plane crash, a college student trying to write their profile on a dating app, and even everyday situations like a barista interacting with a customer. We collect human judgments on the effectiveness of each candidate message and compare how EGS performs against non-simulation baseline methods, including GPT-4 zero-shot and Chain-of-Thought (CoT). Finally, we analyze the agreement between humans and simulated audiences using real world scenarios drawn from the Stanford Human Preferences (SHP) dataset (Ethayarajh et al., 2022). In both evaluations, our approach convincingly outperforms their respective baselines. We also find significant agreement between simulated preferences and human scores across three analyses. We provide qualitative examples, showing LLM-generated messages at each step and highlighting our design choices. Viewing LLMs as a library of shared experiences, our simulation approach draws on this library to integrate individual experiences to ultimately help people communicate better. 2 RELATED WORK For an extended related work discussion on prompt engineering and human preferences, we refer the reader to Appendix D. Interpersonal relationships. Research in social psychology views interpersonal communication from the perspective of ten fundamental processes that underlie social interaction (Berger & Roloff, 2019). These processes may be present regardless of the social context within which communication occurs (see Appendix A.1 for a list). 
Grice identified a set of maxims surrounding the problem of how a cooperative speaker should choose what to say, such as truthfulness or relevance (Grice, 1957; 1975; 1989). Classical formal models of communication view cooperative communication as information transfer between speaker and listener (de Saussure, 1916; Lewis, 1969; Shannon, 1948), with a focus on aligning the true state between agents as the goal of communication (Stalnaker, 1978). The Rational Speech Act (RSA) framework (Frank & Goodman, 2012; Goodman & Frank, 2016) draws on both, modeling informative speakers as aiming to reduce the listener’s uncertainty over the true world state, assuming listeners make rational inferences from the utterances they hear. Simulations in the human mind. The act of projecting oneself into the future to pre-experience an event is formalized in cognitive science as “episodic future thinking” (Alantce & O’Neill, 2001). At the neuropsychological level, brain regions traditionally associated with memory are similarly engaged when people imagine future experiences (Schacter et al., 2007). Szpunar (2010) covers the close relation between episodic future thought and the ability to remember personal episodes from one’s past. In cognitive psychology, Klein et al. (2010) find evidence that a goal of long-term memory is to store information about the past to plan for the future. Thus, if we consider LLMs as encoding an aggregation of personal experiences across a large subset of human society, they may also have the capacity to simulate episodic future thought. Furthermore, as LLMs are theoretically able to take experiences and infer and represent properties of an agent likely to have had those experiences (Andreas, 2022), they may be able to simulate episodic future thought from the perspective of an average agent with a particular set of properties, leading to the possibility of realistic LLM-simulated audiences. Agent simulations. LLM simulations are seen as an opportunity to expand research in computational social science (Ziems et al., 2023). Although rule-based simulations have traditionally been used to study social phenomena, they are limited in expressivity (Schelling, 1971; Easley & Kleinberg, 2010). LLMs can potentially simulate more complex interactions that are harder to codify. They can also be explicitly conditioned to simulate individuals with goals and objectives (Jones & Steinhardt, 2022; Koralus & Wang-Maścianica, 2023; Liu & Shah, 2023). This was recently used to simulate social computing systems (Park et al., 2022), rollout their members’ interactions (Park et al., 2023), generate public opinion (Chu et al., 2023), and even produce individualized subjective experience descriptions (Argyle et al., 2023). However, there are uncertainties in using LLM simulations due to their unpredictability (Salganik et al., 2006). LLMs exhibit higher homogeneity of opinions than humans (Argyle et al., 2023; Santurkar et al., 2023), and combining LLMs with human samples is essential to avoid algorithmic monoculture, leading to collapsed generations that constitute only a limited set of perspectives (Kleinberg & Raghavan, 2021; Bommasani et al., 2021). 3 THE EXPLORE-GENERATE-SIMULATE (EGS) FRAMEWORK The EGS framework takes as input an arbitrary scenario where an individual is communicating to an audience with a goal they want to achieve. As output, it suggests a candidate message and set of advice to help the individual achieve their goal. Example. 
An example input may be: A company CEO is announcing their new smartphone product, the jPhone, and they want to maximize its sales. The jPhone has the following features . . . Right now, they are about to give a quick 30-second presentation about the jPhone, broadcasted to major television channels. Here, the individual is the CEO, and their goal is to maximize the sales of the jPhone. Given the input, we perform three steps: In the Explore step, EGS asks an LLM to generate a diverse set of advice for the scenario. For example, it might suggest: Highlight the fast processing speed and seamless, lag-free user experience. In the Generate step, for each set of advice, EGS asks an LLM to generate candidates for the communication. For the advice above, it might generate: Introducing the jPhone. With unbeatable processing speeds to rival even the finest on the market . . . In the Simulate step, EGS first asks an LLM to generate a list of stakeholder audience profiles, each with a unique description and perspective. For the above scenario, a stakeholder might be media outlets, and their perspective: Your job is to listen to the CEO’s presentation, understand the key features and selling points of the smartphone, and relay the information to the public . . . EGS uses simulated audience perspectives to evaluate each candidate, with the metric as the likelihood and magnitude to which the candidate achieves the communicator’s goal. Why is default LLM prompting insufficient? When using current LLM methods such as CoT, generated candidates often lack diversity in their wording and approach. Furthermore, the communicator might lack the perspectives of the important audiences in order to accurately evaluate one message candidate against another. Additionally, as we broaden the search space with many potential candidates, the user can get easily overwhelmed when deciding between the options, requiring us to build a more scalable method of making judgements between potential message candidates. Our EGS framework addresses each of these challenges, and it is modular such that each step can be implemented flexibly based on one’s specific scenario. We now summarize each EGS component. 3.1 Explore The purpose of the Explore step is to expand the space of possible candidate generations. This step generates a list of distinct pieces of advice to later condition the candidate generation upon. We follow existing literature in Social Psychology, which finds that people recall useful advice (Yaniv, 2004) or prior experiences (Schacter et al., 2007) when considering their next action. Similarly, Explore generates relevant pieces of advice that will be useful for the next stage of the framework. Additionally, EGS prompts the LLM to generate “unorthodox but potentially helpful” advice to increase the diversity of candidates. In the example above, GPT-4 generates the unorthodox advice: In your 30-second presentation, declare a limited time 24-hour sale where the first 100 customers get the phone at an additional 50% off, creating hype and urgency to buy immediately. We find that unorthodox advice generated by GPT-4 are clever and creative (see Section 4), while also improving the downstream candidates in 4 of 8 scenarios (see Appendix B.8). We theorize that this is because LLMs are exposed to a large trove of human experiences, allowing it to catch successful strategies that are only employed by smaller communities or even just a few individuals. 
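A minimal sketch of the Explore step is given below. It assumes a generic `llm(prompt) -> str` callable as a stand-in for whatever chat model is used, and the prompt wording and advice counts are illustrative rather than the exact prompts used by EGS.

```python
from typing import Callable, List

def explore_advice(scenario: str, goal: str, llm: Callable[[str], str],
                   n_normal: int = 3, n_unorthodox: int = 1) -> List[str]:
    # Normal advice: practical strategies relevant to the scenario and goal.
    normal = llm(
        f"{scenario}\nThe communicator's goal: {goal}\n"
        f"List {n_normal} distinct, practical pieces of advice on what they should "
        "say or do to achieve the goal, one per line."
    )
    # Unorthodox advice: creative strategies the communicator is unlikely to consider.
    unorthodox = llm(
        f"{scenario}\nThe communicator's goal: {goal}\n"
        f"List {n_unorthodox} unorthodox but potentially helpful piece(s) of advice "
        "they are unlikely to think of on their own, one per line."
    )
    return [a.strip() for a in (normal + "\n" + unorthodox).splitlines() if a.strip()]
```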
3.2 Generate The Generate step seeks to create reasonable candidates for the communication guided by advice from the Explore step. EGS forms combinatorial subsets of the advice generated from the previous step, and each subset is used to generate a few candidate messages. Following Park et al. (2023), we utilize the “inner voice” of the communicator to condition generated candidates on their assigned advice set, as it makes the LLM more likely to treat the statement as a directive. Using this, we frame the advice set as what the individual remembers in the moment: You remember a few pieces of advice: . . . You decide to focus on using these pieces of advice during your presentation. Conditioning on a combinatorial spread of advice further expands the explored solution space. This allows each candidate to incorporate orthogonal advice concepts, which we find leads to better performance through an ablation (see Appendix B.10). We also conduct a preliminary investigation into generating candidates conditioned on specific audiences in addition to advice, and find that they are not preferred over those generated with only advice (see Appendix B.12). A key feature that enables combinatorial advice is the scalability of EGS. Using LLMs, we can generate and compare between candidates at much larger scale compared to human thought, allowing for more sophisticated explorations around our choices. We showcase the quality of the generated candidates through both a demonstration (Section 4) and quantitative analyses (Section 5). 3.3 Simulate The Simulate step consists of two parts. First, EGS generates a list of key audiences who have influence over the communicator’s goal, and constructs profile descriptions for each. Next, EGS simulates the reactions of these audiences to each candidate message, and aggregates the results to determine which candidate is best for achieving the goal of the communicator. For each audience, EGS asks the LLM to construct 1) a description of the scenario and reception of a candidate message from their point of view, 2) the appropriate question to ask for how their reaction to each candidate would directly impact the communicator’s goal, and 3) a weight for the audience’s relative importance. For the media outlets audience in the example, the question generated was: *In which scenario would you be more likely to give more media coverage and promotion towards the jPhone?* We aggregate audience evaluations and get the best candidate and advice using a simple weighted sum. For details on the audience generation and examples, please see Appendix A.5. Motivated by literature on cognitive choices (Gates et al., 2020), we provide other audience aggregation options and a brief analysis in Appendix B.13. We consider aggregation via generating a new candidate using the top-performing candidates and their comparisons in Appendix B.14. Since candidates can be very close in quality, LLMs can lack granularity when giving ratings to individual candidates (Qin et al., 2023). Thus, we ask simulated audiences to use pairwise comparisons instead. We provide simulated audiences with two scenarios, one representing each candidate in the pairwise comparison (see Appendix A.5), and ask it to reason about which is better before providing an outcome \( o \in \{“prefer scenario 1”, “prefer scenario 2”, “tie”\} \). Once we have outcomes for each audience, we aggregate scores across audiences to get the best candidate \( c^* \). 
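A minimal sketch of this weighted pairwise aggregation follows; the `judge` callable stands in for an LLM-simulated audience comparison, the audience profiles and weights are illustrative, and the 1 / 0.5 / 0 scoring matches the `compare` rule formalized in the display below.

```python
from itertools import combinations

def best_candidate(candidates, audiences, judge):
    """audiences: list of (profile, weight); judge(profile, a, b) -> 'a' | 'b' | 'tie'."""
    scores = {c: 0.0 for c in candidates}
    for a, b in combinations(candidates, 2):
        for profile, weight in audiences:
            outcome = judge(profile, a, b)
            if outcome == "a":
                scores[a] += weight
            elif outcome == "b":
                scores[b] += weight
            else:                       # tie: half credit to each side
                scores[a] += 0.5 * weight
                scores[b] += 0.5 * weight
    return max(scores, key=scores.get), scores

# Toy example with a deterministic stand-in judge that prefers longer candidates.
cands = ["short pitch", "a somewhat longer pitch", "the longest, most detailed pitch"]
auds = [("media outlets", 0.6), ("loyal customers", 0.4)]
judge = lambda p, a, b: "a" if len(a) > len(b) else ("b" if len(b) > len(a) else "tie")
print(best_candidate(cands, auds, judge)[0])   # "the longest, most detailed pitch"
```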
We perform an ablation to show that pairwise comparisons select high-performing candidates in Appendix B.6. \[ c^* = \arg\max_c \sum_{c' \neq c} \text{compare}(c, c') \\ \text{compare}(c, c') = \begin{cases} 1 & \text{if “prefer } c\text{”} \\ 0.5 & \text{if “tie”} \\ 0 & \text{if “prefer } c'\text{”} \end{cases} \] 4 Demonstrations **Explore** In the Explore step, we generate three normal advice and one unorthodox advice. We observe that each advice takes into account the young man’s true impression, and attempts to bridge the gap between his impression and the date’s expectation. Generally, we find that GPT-4 avoids the use of lies, which we believe is an important value when improving interpersonal communication. The unorthodox advice provides a unique and unexpected strategy for the young man, and is natural to implement. In general, we found that unorthodox advice are both reasonable and creative, surpassing our expectations and demonstrating the value that EGS can bring into everyday situations. **Generate** In the Generate step, we have two candidates each generated with two pieces of advice. The first candidate uses normal advice 3 in “brings out the sparkle in your eyes”, and the unorthodox advice in “I can see how happy it makes you feel”. In general, GPT-4 often adapts or embellishes advice, such as adjusting “color” to “sparkle” to presumably account for the black color of the jacket. The second candidate interleaves the two advice. It starts off with “That’s an edgy look for you” from advice 2, moves to “can be a bit tricky to pull off” from advice 1, and returns with “I’m having a blast seeing you try out new styles”. Generally, we observe that the Generate step is flexible in how it incorporates multiple advice, seamlessly connecting them with each other and with the scenario. **Simulate** We provide a simulated comparison between the two candidates shown. For Scenario 1, it starts with a summary: “straightforward compliment, focusing on the effect the jacket has on [...] my eyes, and [...] my happiness”. Then, it moves to what it means to them: “He acknowledges my feeling of joy”. Lastly, this forms reflection about him: “It shows he’s paying close attention to me not only at visual but also at emotional level”. Hierarchical reasoning is common in the analyses, allowing the simulated comparisons to consider many facets of the communication candidates. Table 1: EGS with GPT-4 on the “White Lie During Date” scenario. The Simulate step uses the date’s perspective to compare between the two candidates shown. In the comparison, the left candidate in the Generate step is Scenario 1 and the right candidate is Scenario 2. **Scenario: White Lie During Date** A teenage guy is currently out on a date with a girl at the mall. He is attracted to the girl and wants to make the date as enjoyable for her as possible. They are currently in a clothing store and he is waiting while his date is trying out an outfit in the changing room. She comes out of the changing room looking very pleased while wearing a black denim jacket, strikes a pose, and asks, “what do you think?”. Secretly, he thinks the jacket doesn’t look good on her at all, but decides he should try to compliment it anyway. Right now, he is wondering what to say.
**Explore** | Normal advice 1 | Normal advice 2 | Normal advice 3 | Unorthodox advice | |----------------|----------------|----------------|------------------| | Share an affirmation that re-establishes your interest in her, such as “That style can be challenging to pull off, but you certainly can make anything look good.” | Provide a suggestive compliment on the jacket, for instance, “Wow! That’s an edgy look for you. It’s fun seeing you try out new styles.” | Compliment elements surrounding the jacket, like saying, “That jacket really brings out the color in your eyes,” so you avoid commenting on the jacket’s overall look yet keep the comment positive and about her. | Compliment the joy the jacket brings her, rather than the jacket itself. | **Generate** | Normal advice 3 + Unorthodox advice | Normal advice 1 + Normal advice 2 | |-------------------------------------|-----------------------------------| | Wow, I love how that jacket brings out the sparkle in your eyes, and I can see how happy it makes you feel. It certainly adds a unique flair to your whole look. | Wow! That’s an edgy look for you, it really shows off your daring side. You know, that style can be a bit tricky to pull off, but it seems like there’s nothing you can’t make look good. I’m having a blast seeing you try out new styles. | **Simulate** The compliment and feedback I prefer will largely depend on my personality and what I value from my date’s response. In Scenario 1, he gave a straightforward compliment, focusing on the effect the jacket has on my physical appearance, particularly my eyes, and how it elevifies my happiness. He acknowledges my feeling of joy and captures the personalized aspect of my style, which I really appreciate. It shows he’s paying close attention to me not only at visual but also at emotional level. In Scenario 2, he described how the jacket contributes to my overall style, suggesting I’m daring and edgy. He admires how I pull off tricky styles, which indicates he thinks I have a good sense of fashion. However, the phrase “it seems like there’s nothing you can’t make look good” might come off as a generic compliment. But the excitement he’s showing in wanting to continue watching me try new styles implies he’s also interested in sharing the shopping experience together. Considering all points, while both scenarios provide complimentary feedback, I prefer Scenario 1. It resonates well as he not only notices the physical aspects but also the happiness it brings me, which very well shows he cares about my feelings. **Answer:** Scenario 1 For Scenario 2, the simulated reaction judges “there’s nothing you can’t make look good” as generic, which is an interpretation that is very reasonable but also not universal. In cases like this where the audience is underspecified and their reaction is unpredictable, we find that the LLM often adopts a reaction that represents a majority of the audience demographic. We believe this is a large part of why Simulate is able to achieve a high agreement with human raters and online users. ## 5 Human Evaluations ### 5.1 Data Collection For each scenario, we collect human ratings on all candidates from the Generate step, two baselines (GPT-4 zero-shot and zero-shot CoT), and an ablation altering the search space of the Explore step. In EGS, we use three normal and one unorthodox advice. In a pilot, we found that 3 pieces of advice performed worse than using 1 or 2 (normalized scores 0.47 vs. 
0.55 and 0.56), and so we limit advice sets to two pieces of advice max (see Appendix B.9), yielding 10 advice sets. Following Lu & Shah (2023), we generate three candidates for each set for a total of 30 candidates per scenario. **Baselines.** In GPT-4 zero-shot, we provide the scenario, communicator, and action and generate an utterance directly. In GPT-4 zero-shot CoT (Wei et al., 2022), we ask the model to reason about the scenario before providing what it would say (see Appendix A.7.2). We provide justification for why we select these baselines in Appendix A.7.1. For each baseline, we generate three candidates. For the Explore ablation, we prompt the LLM to generate advice that are encouraging or irrelevant rather than conceptual. We use three pieces of advice each, resulting in 18 candidates per scenario.
Table 2: The best candidate message selected by EGS outperforms GPT-4 zero-shot in human ratings across all constructed scenarios, and outperforms GPT-4 with CoT in five scenarios and a subset of a sixth scenario. *, **, and *** denote $p < 0.05$, $p < 0.01$, and $p < 0.001$ when compared to EGS.
| Scenario | GPT-4 zero-shot | Chain-of-Thought | EGS (ours) |
|---------------------|-----------------|------------------|------------|
| Plane Crash | 6.83** | 5.98*** | 7.95 |
| Product Launch | 5.73** | 5.95* | 7.05 |
| Bargaining | 4.68 | 5.98 | 5.85 |
| Barista | 5.58 | 5.78 | 5.40 |
| Sharing Secrets | 3.67*** | 4.17** | 5.55 |
| Dating App | 5.42 | 6.48** | 5.05 |
| White Lie During Date | 6.12 | 6.02 | 6.70 |
| Marriage Argument | 6.78* | 6.70* | 7.80 |
| **Average** | **5.60*** | **5.88** | **6.42** |
**Ratings.** Human ratings were on a 0-10 Likert scale, with (0) highly negative, (5) relatively neutral, and (10) highly positive impact on the communicator’s goal. For the scenario in (Section 4): (0) “I think his comment would make me enjoy the date a lot less.” (5) “I think his comment would not affect how much I am enjoying the date.” (10) “I think his comment would make me enjoy the date a lot more.” **Participants.** Our dataset comprises 12180 human judgments from $N = 652$ UK participants crowdsourced via Prolific. All participants provided informed consent prior to participation in accordance with an approved institutional review board protocol, and were paid 12 USD per hour. We collected 20 judgments per candidate and baseline, and each participant provided $\leq 20$ judgments. This yielded an excellent average inter-rater reliability of $r = .82$. More details are in Appendix A.8. 5.2 Result: EGS outperforms GPT-4 Zero-shot and CoT Averaging across all scenarios, the average human rating of EGS outperforms GPT-4 zero-shot by 0.82 (14.6%) and CoT by 0.54 (9.2%), both statistically significant at $\alpha = 0.01$ using bootstrapping with 10000 samples. Separating by scenario, EGS outperforms GPT-4 zero-shot in all scenarios, and CoT in five scenarios (Table 2), of which four in each case are statistically significant at $\alpha = 0.05$. All outputs from EGS surpassed a mean score of 5, indicating that they all had a positive impact on the communicator’s goal, whereas this was not the case for either baseline. In the Bargaining scenario, we find a large discrepancy between human and GPT-4 preferences on the unorthodox advice (see Appendix B.1 for investigation). After reducing the Explore space by removing the unorthodox advice, EGS outperforms CoT by a large margin (5.85 $\rightarrow$ 6.60 vs. 5.98).
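For reference, the kind of bootstrap test used for these comparisons can be sketched as follows; the exact resampling scheme in the paper may differ, so the independent resampling of the two rating pools and the one-sided formulation here are assumptions.

```python
import numpy as np

def bootstrap_p(egs_ratings, baseline_ratings, n_boot=10_000, seed=0):
    """One-sided bootstrap p-value that EGS's mean rating exceeds the baseline's."""
    rng = np.random.default_rng(seed)
    egs = np.asarray(egs_ratings, dtype=float)
    base = np.asarray(baseline_ratings, dtype=float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each rating pool with replacement and record the mean difference.
        diffs[b] = (rng.choice(egs, egs.size, replace=True).mean()
                    - rng.choice(base, base.size, replace=True).mean())
    return float((diffs <= 0).mean())  # fraction of resamples showing no EGS advantage
```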
5.3 Result: Significant agreement between human scores & GPT-4 comparisons Multilevel model across scenarios. Using a multilevel model, we analyze the agreement between GPT-4 and human raters by assigning scenario as a random effect. We measure if candidates preferred in pairwise comparisons by the combined stakeholders had higher mean scores from human raters than those less preferred. For more details, please see Appendix B.2. The results revealed a significant fixed effect for the pairwise judgements on the score provided by human raters ($\text{coef} = 0.427, p = 0.041$), demonstrating significant agreement between GPT-4 and human scores across scenarios. This effect differed across scenarios, indicating the multilevel model’s appropriateness in taking into account the hierarchical nature of our data. Table 3: We find significant agreement across GPT-4 and human ratings in five scenarios and a modified sixth scenario. Preferred and less preferred values are mean scores across all GPT pairwise evaluations, with standard errors of the mean. *, **, and *** denote \( p < 0.05, 0.01 \) and 0.001 respectively. Agreement is a percentage agreement across pairs. | Scenario | Preferred | Less Preferred | Agreement | |---------------------------|-----------------|------------------|-----------| | Plane Crash | 6.19 ± 0.03*** | 5.86 ± 0.03 | 0.63 | | Product Launch | 6.20 ± 0.03*** | 5.87 ± 0.03 | 0.67 | | Bargaining | 5.90 ± 0.03 | 5.99 ± 0.02* | 0.53 | | Bargaining (-unorthodox advice) | 6.35 ± 0.04*** | 6.06 ± 0.03 | 0.69 | | Barista | 4.66 ± 0.08*** | 3.53 ± 0.09 | 0.64 | | Sharing Secrets | 5.72 ± 0.03*** | 4.99 ± 0.04 | 0.78 | | Dating App | 5.24 ± 0.03 | 5.44 ± 0.03*** | 0.41 | | White Lie During Date | 6.70 ± 0.03 | 6.81 ± 0.03** | 0.43 | | Marriage Argument | 6.34 ± 0.04*** | 6.01 ± 0.03 | 0.65 | GPT-4 comparisons vs. human ratings per scenario. For each scenario, we conducted a paired samples t-test across the preferred and less preferred candidates of the pairwise comparisons, and find that the preferred have significantly higher scores in 5/8 scenarios with \( \alpha = 0.001 \) (see Table 3). Though this metric allows us to perform statistical tests, it is largely affected by easier comparisons which have a large disparity in scores, i.e., comparisons where one candidate is clearly better than the other. Thus, we follow with a percentage agreement analysis where each pair is weighted equally. Percentage agreement within individual scenarios. For each pair of candidates, we aggregate the pairwise comparisons made by audiences using a weighted sum, and compare the outcome with the mean human scores of each candidate to see if they match. Tied comparisons are labeled as a half-match. We divide the matching pairs by the total pairs to obtain percentage agreement. For a mathematical formulation and justification of why we choose this metric, please refer to Appendix B.4. In five scenarios and the modified bargaining scenario, we find agreement > 0.6 between human raters and GPT-4 (Table 3). In Appendix B.5, we do the same analysis with only non-borderline cases, and find the agreement increases accordingly, with four scenarios eventually reaching 0.8 agreement with moderate sample size with a high borderline threshold. 6 BROADER INTERNET USER SIMULATION We further evaluate EGS’s audience simulation component on a broader space of interaction using the Stanford Human Preferences (SHP; Ethayarajh et al., 2022) dataset. 
SHP contains 385K human preferences over responses to online forum posts across a wide variety of subject areas from cooking to legal advice, making it a robust test bed for the simulation of different audiences. Each entry contains a forum post, two comments from the discussion, and the number of upvotes each comment received from forum users. We provide two examples of the questions/scenarios in Tables 16 and 17. We evaluate three methods on their accuracy of predicting the comment with more upvotes. First, we use a CoT baseline, which prompts the LLM to reason about the post before predicting which comment has more upvotes. Next, we use two versions of Simulate, each with an audience that we define beforehand. EGS Redditor simulation (Default) takes the perspective of a Redditor browsing the forum before prompting it to reason about which comment it is more likely to upvote; EGS Redditor simulation (Funny) additionally specifies that the simulated Redditor is more likely to upvote funny and entertaining comments. Prompts and output examples can be found in Appendix C.1. Following the authors, we filter SHP data by a ratio threshold of 3, ensuring that the more preferred comment is nontrivially preferred in each pair. To reduce the cost of API access, we select 5 subreddits and randomly sample 100 test examples from each, resulting in a total of 500 evaluations. We observe that EGS Redditor simulation is equal or better than the CoT baseline (Table 4), suggesting that the LLM is able to make decisions more aligned with real users when explicitly prompted to simulate them. Directing the model to look for funny/entertaining comments can also significantly boost performance on more casual forums such as cooking and legal advice. Table 4: EGS conditioned on different audience prompts outperforms GPT-4 w/ CoT on user preferences. Legaladvice and askculinary are more casual, where a fun-seeking audience leads to higher performance, while default audience is more accurate on serious forums such as asksocialscience. | Domain | Chain-of-Thought | EGS Redditor Sim. (Default) | EGS Redditor Sim. (Funny) | |-----------------|------------------|-----------------------------|---------------------------| | legaladvice | 71.0 | 70.0 | **76.0** | | askculinary | 59.0 | 60.0 | **70.5** | | askhr | 72.0 | **76.0** | 72.5 | | eli5 | 74.0 | **76.0** | 71.5 | | asksocialscience| 77.8 | **79.4** | 61.9 | In domains such as asksocialscience where strict rules are enforced on the informativeness and sincerity of comments, the demographic of viewers match Redditor Simulation (Default) more, and the performance of EGS Redditor simulation (Funny) drops accordingly. We perform a more cohesive investigation into redditor personalities and how they affect performance in Appendix C.2. Our results further validate that the Simulate component can generalize to diverse scenarios, and demonstrates that its performance can be further boosted with a better understanding of the audience. 7 DISCUSSION We discuss benefits and limitations of EGS. In a more detailed discussion (Appendix E), we highlight extensions to user controls and multi-turn conversations, applications in counterfactual reasoning and human studies + RLHF, and the meta-level concepts of optimal simulation granularity, viewing LLMs as shared cultural experience, and broader impacts. 7.1 INTERPRETABLE EXPLANATIONS AND IMMEDIATELY AVAILABLE ALTERNATIVES A benefit of EGS is that it provides easily accessible and interpretable explanations. 
Each comparison between candidates contains detailed reasoning, so users can easily find explanations for why a candidate is preferred over another from the perspective of any audience. Combined, these form a collective explanation for how the framework chose the best candidate and advice. If a user dislikes EGS’s suggestion, they can access a list of alternatives that also did well in Simulate, or change the stakeholder weights and aggregation mechanism (Appendix B.13) based on their preferences. 7.2 SCALABLE EPISODIC FUTURE THINKING A key contribution of EGS is as a scalable alternative to episodic future thinking. In our experiments, each audience in each scenario performed 1305 pairwise comparisons. This took between 2-4 hours with one API key in Sep. 2023 (10000 tokens/min), averaging to one simulated comparison every 6-11s. While human simulations of the future are limited by the linear stream of consciousness over time, EGS simulations can be parallelized to achieve speeds much faster than human reasoning. 7.3 LIMITATIONS AND POTENTIAL NEGATIVE USE We acknowledge that EGS is dual-use and can be used to optimize communications detrimental to society. While EGS can simulate readers to improve emails, it can also increase the success of phishing. Though GPT-4 exhibits desirable qualities like avoiding lies, other LLMs may not meet the same standards, so EGS may select utterances that superficially improve communication through deceit or manipulation, or hallucinate personal details that do not exist (see Appendix B.15 for analysis). As more models are trained using RLHF to recognize and refuse queries with malicious intent (Touvron et al., 2023; Huang et al., 2023), the safety concerns of EGS will also improve. We also acknowledge that our scenarios fulfill common social roles, e.g., a young man struggling to compliment his female date. We encourage future work to analyze simulated audiences with more diverse backgrounds, as being less represented in training data may affect their simulation accuracy. EGS may also adopt inherent weaknesses of LLMs including social biases that may seep into decision making. Thus, we recommend users to validate outputs before putting them into use. REFERENCES Jacob Andreas. Language models as agent models. *arXiv preprint arXiv:2212.01681*, 2022. Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. *Political Analysis*, 31(3):337–351, 2023. Cristina M Atance and Daniela K O’Neill. Episodic future thinking. *Trends in cognitive sciences*, 5(12):533–539, 2001. Yuntao Bai et al. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022. Charles R. Berger and Michael E. Roloff. *An Integrated Approach to Communication Theory and Research, Third Edition*, chapter Interpersonal Communication. Taylor and Francis, 2019. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021. William Brown. Some experimental results in the correlation of mental abilities 1. *British Journal of Psychology*, 3(3):296–322, 1910. Eric Chu, Jacob Andreas, Stephen Ansolabehere, and Deb Roy. Language models trained on media diets can predict public opinion. *arXiv preprint arXiv:2303.16779*, 2023. Ferdinand de Saussure. 
*Course in General Linguistics*. Columbia University Press, 1916. David Easley and Jon Kleinberg. *Networks, crowds, and markets: Reasoning about a highly connected world*. Cambridge University Press, 2010. Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, pp. 5988–6008, Jul 2022. Michael C Frank and Noah D Goodman. Predicting pragmatic reasoning in language games. *Science*, 334(6084):998–998, 2012. Vael Gates, Thomas L Griffiths, and Anca D Dragan. How to be helpful to multiple people at once. *Cognitive science*, 44(6):e12841, 2020. Daniel T Gilbert and Timothy D Wilson. Prospection: Experiencing the future. *Science*, 317(5843):1351–1354, 2007. Daniel T Gilbert, Brett W Pelham, and Douglas S Krull. On cognitive busyness: When person perceivers meet persons perceived. *Journal of personality and social psychology*, 54(5):733, 1988. Noah D Goodman and Michael C Frank. Pragmatic language interpretation as probabilistic inference. *Trends in Cognitive Sciences*, 20(11):818–829, 2016. Herbert P Grice. Meaning. *Philosophical Review*, 66(3):377–388, 1957. Herbert P Grice. Logic and conversation. In *Speech acts*, pp. 41–58. Brill, 1975. Herbert P Grice. *Studies in the Way of Words*. Harvard University Press, 1989. Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, and Danqi Chen. Catastrophic jailbreak of open-source llms via exploiting generation, 2023. Siddhartha Jain, Xiaofei Ma, Anoop Deoras, and Bing Xiang. Self-consistency for open-ended generations, 2023.
WYEEWScbaM
I am not certain how the optimization problem (2) is solved; the authors only briefly mention that they adopt a greedy strategy in the manuscript. Based on my own knowledge, I guess the method is to solve for only one data tensor at a time and add it to the total dataset until the optimization error is lower than a certain value. Am I correct?
COMMUNICATION-EFFICIENT FEDERATED LEARNING VIA GRADIENT DISTILLATION Anonymous authors Paper under double-blind review ABSTRACT Federated learning revolutionizes collaborative model training across decentralized edge devices, ensuring privacy by avoiding direct data sharing. However, the frequent exchange of model updates introduces a significant communication overhead. The conventional FL process involves transmitting the differences in parameters between old and new models, resulting in redundant gradient communications due to the intricate interplay between model parameters and network architecture. Even minor adjustments to parameters necessitate the retransmission of entire models. In this paper, we introduce a groundbreaking concept known as gradient distillation, which decouples model parameters from network architecture, enabling the transmission of only essential information needed for synchronization. By leveraging gradient distillation, we approximate gradient disparities into a synthetic tensor sequence, allowing the recipient to reconstruct the sender’s intended model update. This approach eliminates the need to transmit the entire set of raw parameter differences, offering a highly promising solution for achieving greater communication efficiency without significant accuracy degradation. Experimental results demonstrate that our approach achieves an unprecedented level of gradient compression performance, surpassing widely recognized baselines by an impressive margin of orders of magnitude. 1 INTRODUCTION Federated learning (FL) (McMahan et al., 2017; Shokri & Shmatikov, 2015) is a promising paradigm in machine learning that addresses the challenge of training models on decentralized data sources. Traditional machine learning approaches rely on centralized servers to collect and process all the data used for training. However, in many real-world scenarios, this centralized approach is impractical due to privacy concerns (Ching et al., 2018), regulatory constraints (GDPR, 2016), or the sheer volume of data generated at the edge (Zhou et al., 2019). FL emerged as a solution to this problem by allowing model training to occur directly on edge devices where the data are generated, without the need to transmit sensitive information to a central server. This approach has gained significant traction in recent years, particularly with the proliferation of mobile devices (Wang et al., 2020; Chen et al., 2023), IoT devices (Imteaj et al., 2021; Zhang et al., 2022), and edge computing (Wang et al., 2019; Nguyen et al., 2021). The decentralized nature of FL makes it well-suited for scenarios where data privacy and regulatory compliance are critical, such as in healthcare applications (Xu et al., 2021; Courtiol et al., 2019), financial transactions (Long et al., 2021; Kaplan, 1989), and other contexts where sensitive information is involved (Niu & Deng, 2022; Li et al., 2021). The inherent design of FL ensures that raw data remains securely stored on edge devices, rendering it inaccessible to the central server. This property is fundamental to the privacy-preserving aspect of FL. Instead of raw data, model updates are transmitted from participating devices to a central server, where they are aggregated to refine the global model (McMahan et al., 2017). Nevertheless, contemporary neural network models are characterized by a staggering number of parameters, often ranging in the millions or even billions.
This is particularly exemplified by the immensely popular large language models (Brown et al., 2020), which can possess hundreds of billions of parameters. In many real-world scenarios, especially those involving decentralized data sources like mobile devices, transmitting such vast amounts of model parameters to a central server may be impractical or infeasible due to limitations in network bandwidth, high latency, or intermittent connectivity. The substantial communication cost acts as a barrier, impeding FL from effectively scaling the training process to accommodate a larger number of participants. Considering the communication-efficiency issue in FL is imperative for ensuring the practicality, scalability, and energy efficiency of the approach, especially in real-world applications where decentralized data sources are prevalent. Communication-efficient federated learning has gained significant attention in recent years (Aji & Heafield, 2017; Reiszadeh et al., 2020; Hönig et al., 2022; Liu et al., 2023). Existing approaches can be broadly categorized into several main strategies, each with its own set of limitations. The first strategy involves allowing devices to perform multiple local updates before transmitting their model updates to the central server, thereby reducing the frequency of communication rounds (McMahan et al., 2017; Haddadpour et al., 2019). While this approach reduces the overall transmission data amount, it may result in slower convergence and potential overfitting if not carefully managed. The second strategy focuses on compressing gradient information (Liu et al., 2023; Reiszadeh et al., 2020). This includes methods such as quantization (Hönig et al., 2022), which reduces the precision of gradient values to decrease the transmitted information volume, and sparsification (Aji & Heafield, 2017; Dai et al., 2022), which involves sending only a subset of gradients by retaining the most significant ones and setting others to zero. However, aggressive quantization and sparsification can lead to information loss and potentially hinder model accuracy. Additionally, pruning (Zhu et al., 2022; Wang et al., 2022) is employed to remove less influential weights or neurons from the model, effectively reducing the parameter count and, subsequently, the size of gradients. However, pruning methods require careful hyperparameter tuning and may result in model degradation if not applied judiciously. Another strategy focuses on uploading logits (Sattler et al., 2020) or dataset representations (Xiong et al., 2023). Instead of transmitting raw gradients, devices calculate and send logits (pre-softmax outputs) for their data samples to the central server. Alternatively, devices can convey aggregated representations of their datasets, such as centroids or other statistical summaries (Liu et al., 2022), which serve as a compact proxy for the raw gradients. These strategies offer diverse avenues to tackle communication overhead in FL, each with distinct trade-offs in terms of communication reduction, computational complexity, and potential impact on model performance. In this paper, we introduce a groundbreaking concept called gradient distillation, which exhibits unprecedented performance in FL communication compression, surpassing the renowned baseline FedAvg by a remarkable margin of $1904\times$ on benchmarking medical dataset PathMNIST. 
The conventional FL technique involves transmitting parameter differences between old and new models, resulting in redundant gradient communication due to the intricate relationship between model parameters and network architecture. Even minor parameter adjustments necessitate the retransmission of entire models. To tackle this issue, we propose decoupling model parameters from network architecture, enabling transmission of only essential information for synchronization. By employing gradient distillation, we approximate gradient disparities into a synthetic tensor sequence, allowing the recipient to reconstruct the sender’s intended model update. This parameter-structure decoupling leads to a significant reduction in gradient communication, as it avoids the need to transmit the entire set of raw parameter differences. Our method offers a highly promising solution for achieving more efficient FL. It greatly enhances communication efficiency, a critical concern in FL systems, ultimately improving the scalability and applicability of FL in real-world applications, particularly for scenarios with limited bandwidth or intermittent connectivity. The main contributions of this paper are summarized as follows: • We introduce Gradient Distillation, a novel approach that distills the structural essence of gradients rather than directly compressing them, allowing for the transmission of only the indispensable information for model updates. • Experimental results demonstrate that gradient distillation reduces communication by orders of magnitude compared to baselines without significant accuracy degradation, enabling highly communication-efficient federated learning. • Our method provides a new perspective on overcoming communication bottlenecks in federated learning, facilitating the application of federated learning at scale on massively distributed devices with limited bandwidth. 2 METHODOLOGY In this paper, we consider federated learning across $N$ edge devices with heterogeneous bandwidth resources. Each device $i$ possesses a private local dataset $\mathcal{D}_i = \{(x^i_j, y^i_j)\}_{j=1}^{m_i}$ drawn from a unique distribution $\mathcal{P}_i$ over $\mathcal{X} \times \mathcal{Y}$. Federated learning aims to collaboratively train a global model without directly accessing private data. To achieve this, edge devices periodically transmit their local model updates to a central server and receive the broadcasted global model. The ultimate objective is to obtain a global model that minimizes the risk across all private datasets, i.e., $$\arg \min_w L(w) \triangleq \frac{1}{N} \sum_{i=1}^{N} L_i(w),$$ where $w$ denotes the model parameters, $L_i(w) = \frac{1}{m_i} \sum_{j=1}^{m_i} \ell(w; (x^i_j, y^i_j))$ denotes the empirical risk with respect to device $i$, and $\ell$ represents the loss function. Our goal is to reduce the communication overhead incurred during this distributed training process. ### 2.1 Gradient Distillation **Motivation.** In each round of FL, the server broadcasts the aggregated global model from the preceding round to the client, while each client performs local training to generate an updated local model. Traditional methods involve transmitting parameter differences between the old and new models. However, this approach maintains a constant data transfer volume irrespective of changes in the numerical value of the parameter difference, as long as the network architecture remains fixed during the FL process. This introduces redundancy in gradient transmission.
We claim that this issue arises due to the interdependence between model parameters and network architecture. Since network parameters and structure are intertwined, even slight parameter adjustments necessitate the retransmission of entire models. To counteract this, we propose decoupling model parameters from network architecture to transmit only the essential information required for synchronization. Specifically, we approximate the gradient disparities between models into a synthetic tensor sequence, employing a distillation-inspired concept. The recipient (i.e., the server side) can then reconstruct the intended model update of the sender (i.e., the client side) by taking a single descent step on these ordered tensors in conjunction with the previous model state. Through this parameter-structure decoupling, our approach transmits only the indispensable information for model updates, bypassing the need to transmit the full set of raw parameter differences. This results in a substantial reduction in gradient communication. We refer to this approach as **gradient distillation**, which will be introduced in detail in the following. **Gradient Distillation.** For a model with parameters $\omega$, we define its difference between two timestamps $t_1$ and $t_2$ as $\Delta \omega(t_1, t_2) = \omega(t_1) - \omega(t_2)$. By approximating $\Delta \omega(t_1, t_2)$, we can update the model parameters at timestamp $t_2$ even when we only have access to the stale model $\omega(t_1)$. To achieve this, we synthesize an ordered tensor sequence $\zeta = \{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m}$, which is tailored to approximate the parameter difference using one-step gradient descent on $\omega(t_1)$. Our objective is to discover the shortest projected path between $\omega(t_1)$ and $\omega(t_2)$. To synthesize this ordered sequence, we minimize the error between the parameter difference and the sequence gradient on $\omega(t_1)$: $$\{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m} = \arg \min_{\{(x_j, y_j)\}_{j=1}^{m}} \left\| \text{optim} \left( \sum_{j=1}^{m} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega(t_1)} \right) - \Delta \omega(t_1, t_2) \right\|_2^2,$$ where $m$ represents the solved length of the optimal sample sequence, and $\text{optim}(g)$ denotes the operations that the optimizer (e.g., SGD or Adam) applies to the gradient $g$. Solving this optimization problem directly poses a significant challenge. Therefore, we adopt a greedy approach to find an approximate solution, illustrated in Fig. 1 (c). The objective is to identify a minimal sequence of synthetic tensors, each associated with labels, that can effectively replicate the true gradient when employed to train the original model. This effectiveness is assessed by evaluating the disparity between the reproduced and actual gradients. Through a step-by-step, iterative process, we incrementally construct the optimal sample sequence. This greedy strategy offers a practical means of working towards the broader objective of obtaining a concise statistical representation of localized model updates.
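The greedy construction is only described at a high level; the following PyTorch sketch illustrates one way to realize it, assuming $\text{optim}(g) = \text{lr} \cdot g$ (plain SGD), fixed random labels for the synthetic samples, and a flattened, detached parameter difference `delta`. Function names such as `synthesize_update`, the stopping tolerance, and the inner optimization schedule are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector (graph kept)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def synthesize_update(model, delta, lr, x_shape, num_classes,
                      max_tensors=32, inner_steps=200, tol=1e-3):
    """Greedily grow a synthetic batch whose one-step SGD update matches `delta`.

    `model` holds the stale parameters omega(t1); `delta` is the flattened
    difference omega(t1) - omega(t2) that the synthetic batch should reproduce.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    xs, ys = [], []
    for _ in range(max_tensors):
        # Greedy growth: add one synthetic tensor (with a fixed random label) at a time.
        x = torch.randn(1, *x_shape, requires_grad=True)
        y = torch.randint(num_classes, (1,))
        xs.append(x); ys.append(y)
        opt = torch.optim.Adam(xs, lr=0.05)   # optimize the synthetic inputs only
        for _ in range(inner_steps):
            opt.zero_grad()
            batch_x, batch_y = torch.cat(xs), torch.cat(ys)
            g = flat_grad(F.cross_entropy(model(batch_x), batch_y), params)
            match = ((lr * g - delta) ** 2).sum()   # || optim(g) - delta ||_2^2
            match.backward()
            opt.step()
        if match.item() < tol:                      # residual small enough: stop growing
            break
    return xs, ys, match.item()
```

The receiver then mirrors Eqs. (4) and (7): it computes the gradient of the same loss on the received synthetic batch at its stale parameters and applies one lr-scaled descent step to recover the intended model.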
### 2.2 Gradient Distillation based Federated Learning We now outline the workflow of the proposed Gradient Distillation based Federated Learning (FedGD) framework, encompassing the following key stages: initialization, broadcasting, local training, uploading and global aggregation. – **Initialization:** The first $M$ rounds serve as the initialization stage, for which the training process adheres to the standard federated learning approach. The central server initiates the process by broadcasting an initial global model $\omega_g^{(0)}$ to all participating devices. Each device then conducts local training on this model, utilizing its individual private data. Upon completion of the local training, the devices upload their respective updated local models. Subsequently, the central server aggregates these models to generate a new global model. After $M$ rounds, the central server possesses the global model with parameter $\omega_g^{(M)}$. For the subsequent training rounds, we introduce the proposed FedGD strategy to reduce the communication burden between the central server and the edge clients, for both the global model broadcasting (downlink communication) and local updates uploading (uplink communication). The workflow is illustrated in Fig. 2 (b). Specifically, for the $t$-th round, our framework works as follows: – **Broadcasting:** In this stage, the central server performs gradient distillation between the new global model $\omega_g^{(t)}$ and the broadcasted model $\omega_g^{(t-1)}$ in the last round. The objective is to obtain a compressed datastream $\zeta_g^{(t)}$ for downlink communication, which is a tensor sequence of length $m_g^{(t)}$. To achieve this, the server solves the following optimization problem: $$\zeta_g^{(t)} = \arg\min_{\{(x_j, y_j)\}_{j=1}^{m_g^{(t)}}} \left\| \text{optim} \left( \sum_{j=1}^{m_g^{(t)}} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t-1)}} \right) - \Delta \omega_g(t-1, t) \right\|_2^2,$$ where $m_g^{(t)}$ represents the length of tensor $\zeta_g^{(t)}$, and $\Delta \omega_g(t-1, t)$ denotes the difference between $\omega_g^{(t)}$ and $\omega_g^{(t-1)}$. Instead of broadcasting the parameter difference, the server broadcasts $\zeta_g^{(t)} = \{(\hat{x}_j, \hat{y}_j)\}_{j=1}^{m_g^{(t)}}$ to all participating devices, which significantly reduces the downlink burden. – **Local Training:** Upon receiving the broadcasted tensor $\zeta_g^{(t)}$, each edge device recovers the intended global model $\omega_g^{(t)}$ using the global model $\omega_g^{(t-1)}$ from the last round. This recovery process is achieved through one-step gradient descent on $\zeta_g^{(t)}$: $$\omega_g^{(t)} = \omega_g^{(t-1)} - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta_g^{(t)}} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t-1)}} \right).$$ Furthermore, each edge device $i$, starting from the recovered global model $\omega_g^{(t)}$, performs local training on its private dataset for $K$ iterations to update its parameters as $\omega_i^{(t,k)}$: $$\omega_i^{(t,k)} \leftarrow \omega_i^{(t,k-1)} - \gamma \sum_{(x,y) \in B_i} \frac{\partial \ell(\omega; (x, y))}{\partial \omega} \bigg|_{\omega=\omega_i^{(t,k-1)}} \quad \text{for } k \in [K],$$ where $\omega_i^{(t,0)} = \omega_g^{(t)}$, the updated local model is denoted $\omega_i^{(t)} = \omega_i^{(t,K)}$, and $B_i$ is a random batch drawn from $D_i$.
– **Uploading:** After local training, each device $i$ executes gradient distillation between the global model $\omega_g^{(t)}$ and the updated local model $\omega_i^{(t)}$: $$\zeta_i^{(t)} = \arg\min_{\{(x_j, y_j)\}_{j=1}^{m_i^{(t)}}} \left\| \text{optim} \left( \sum_{j=1}^{m_i^{(t)}} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t)}} \right) - (\omega_g^{(t)} - \omega_i^{(t)}) \right\|_2^2,$$ where $m_i^{(t)}$ denotes the length of tensor $\zeta_i^{(t)}$. The value of $m_i^{(t)}$ is related to the difference between $\omega_i^{(t)}$ and $\omega_g^{(t)}$, which varies in each round. Each device $i$ uploads the synthetic samples $\zeta_i^{(t)}$ to the server, which significantly reduces the uplink communication burden. – **Global Aggregation:** Upon receiving the uploaded tensor $\zeta_i^{(t)}$, the central server performs backpropagation on the model $\omega_g^{(t)}$ with data $\zeta_i^{(t)}$ to recover the intended updated local model $\omega_i^{(t)}$ for each device $i$. This is achieved by the following one-step update: $$\omega_i^{(t)} = \omega_g^{(t)} - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta_i^{(t)}} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega_g^{(t)}} \right).$$
**Algorithm 1:** Gradient Distillation based Communication-efficient Federated Learning
**Input:** $N$ edge devices with private datasets $\{D_i\}_{i=1}^N$, communication round number $T$, learning rate $\eta$, initial rounds $M$, local update number $K$, batchsize $B$.
**Output:** FL-trained global model $\omega_g^{(T)}$.
**Server Executes:**
  Initialization: follow the standard FedAvg workflow. After $M$ rounds, the central server has $\omega_g^{(M)}$ together with $\omega_g^{(M-1)}$, and each device has $\omega_g^{(M-1)}$
  for each communication round $t = M, \ldots, T$ do
    Obtain $\zeta_g^{(t)}$ by performing gradient distillation between $\omega_g^{(t-1)}$ and $\omega_g^{(t)}$ with Eq. (3)
    for each device $i = 1, 2, \ldots, N$ in parallel do
      Broadcast $\zeta_g^{(t)}$ to device $i$
      $\zeta_i^{(t)} \leftarrow$ Device Executes $(i, \zeta_g^{(t)})$
      Reconstruct the local model $\omega_i^{(t)}$ by one-step gradient descent on $\zeta_i^{(t)}$ with Eq. (7)
    end
    $\omega_g^{(t+1)} \leftarrow \frac{1}{N} \sum_{i=1}^N \omega_i^{(t)}$
  end
**Device Executes $(i, \zeta_g^{(t)})$:**
  Reconstruct the global model $\omega_g^{(t)}$ by one-step gradient descent on $\zeta_g^{(t)}$ with Eq. (4)
  Update the local model as $\omega_i^{(t)}$ by local training on $D_i$ with Eq. (5)
  Obtain $\zeta_i^{(t)}$ by performing gradient distillation between $\omega_g^{(t)}$ and $\omega_i^{(t)}$ with Eq. (6)
  Return $\zeta_i^{(t)}$
The server then aggregates the reconstructed local models $\omega_i^{(t)}$ from all selected devices to obtain an updated global model $\omega_g^{(t+1)}$: $$\omega_g^{(t+1)} = \frac{1}{N} \sum_{i=1}^N \omega_i^{(t)}. \quad (8)$$ In essence, the compact tensor sequence $\zeta_i^{(t)}$ allows the server to efficiently recreate device $i$’s full model update through a single gradient step, instead of directly transmitting the high-dimensional model weights $\omega_i^{(t)}$, while preserving accurate information to synchronize the global and local models. **Remark.** As described above, our approach involves performing gradient distillation at both the central server and edge devices, which introduces slightly higher computational demands for these components.
It is important to highlight that, in comparison to conventional FedAvg, our method does not incur longer overall training time. On the contrary, it actually leads to much shorter training times per round. This improvement is attributed to the fact that gradient distillation substantially reduces the amount of data that needs to be transmitted, consequently reducing the time required for transmission. This has been empirically verified in the experiment section, as shown in Table 3. This efficiency gain is a significant advantage of our approach, as it allows for quicker model updates. It is worth noting that communication resources are typically much more constrained than computation resources. Therefore, the approach of reducing the communication burden by slightly increasing the computation burden is highly practical for many real-world applications.
Table 1: Test accuracy (%) and data transmission size reduction ratios of FedGD and baseline methods on CIFAR-10 using different networks.
| Model | Method | Avg. data transmission Volume (MB) | Speedup | Min. data transmission Volume (MB) | Speedup | Accuracy (%) |
|-------------|-------------------------|------------------------------------|---------|-----------------------------------|---------|--------------|
| MobileNet | FedAvg | 4.21 | 1× | 4.21 | 1× | 82.48 |
| | Top-k (Aji & Heafield, 2017) | 2.11 | 2× | 2.11 | 2× | 78.85 |
| | FedPAQ (Reiszadeh et al., 2020) | 1.05 | 4× | 1.05 | 4× | 79.77 |
| | DAdaQ (Hönig et al., 2022) | 1.92 | 2.19× | 1.05 | 4× | 80.68 |
| | AdaGQ (Liu et al., 2023) | 1.57 | 2.68× | 1.05 | 4× | 80.22 |
| | FedGD | 0.0328 | 128× | 0.0061 | 690× | 82.19 |
| | Parallel FedGD | 0.0472 | 89× | 0.0061 | 690× | 82.08 |
| ShuffleNet | FedAvg | 5.42 | 1× | 5.42 | 1× | 83.15 |
| | Top-k (Aji & Heafield, 2017) | 2.71 | 2× | 2.71 | 2× | 79.07 |
| | FedPAQ (Reiszadeh et al., 2020) | 1.36 | 4× | 1.36 | 4× | 80.49 |
| | DAdaQ (Hönig et al., 2022) | 2.64 | 2.05× | 1.36 | 4× | 81.87 |
| | AdaGQ (Liu et al., 2023) | 1.98 | 2.74× | 1.36 | 4× | 81.62 |
| | FedGD | 0.0421 | 129× | 0.0031 | 1765× | 82.98 |
| | Parallel FedGD | 0.0571 | 95× | 0.0031 | 1765× | 82.74 |
| ResNet-18 | FedAvg | 11.69 | 1× | 11.69 | 1× | 85.31 |
| | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 80.14 |
| | FedPAQ (Reiszadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 83.39 |
| | DAdaQ (Hönig et al., 2022) | 4.97 | 2.35× | 2.92 | 4× | 82.47 |
| | AdaGQ (Liu et al., 2023) | 4.09 | 2.86× | 2.92 | 4× | 82.09 |
| | FedGD | 0.0369 | 317× | 0.0092 | 1268× | 85.11 |
| | Parallel FedGD | 0.0508 | 230× | 0.0154 | 759× | 85.03 |
### 2.3 Parallel Version To further mitigate potential efficiency losses due to the extra computational steps, inspired by the parallel federated learning framework of Zhang et al. (2023), we concurrently conduct server-side model aggregation while allowing for device-level local training. By overlapping the aggregation stage with the local update stage, our solution maintains substantial communication savings without compromising computational throughput, as indicated in Fig. 2 (c). Specifically, for the $t$-th round, the parallel version of our framework (Parallel FedGD) works as follows: – **Client Side.**
• **Local Training:** Each device $i$ receives the broadcasted gradient distillation tensor $\zeta^{(t-1)}_g$ from the server, and reconstructs the global model $\omega^{(t-1)}_g$ based on the previous model $\omega^{(t-2)}_g$: $$\omega^{(t-1)}_g = \omega^{(t-2)}_g - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta^{(t-1)}_g} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega^{(t-2)}_g} \right).$$ Device $i$ then performs local training to get the updated local model $\omega^{(t)}_i$ according to Eq. (5). • **Uploading:** After completing the local training, device $i$ performs gradient distillation to get $\zeta^{(t)}_i$ based on $\omega^{(t)}_i$ and $\omega^{(t-1)}_g$ according to Eq. (6). $\zeta^{(t)}_i$ is then uploaded to the central server. – **Server Side.** • **Broadcasting:** During local training and gradient distillation on edge device $i$ to obtain the local model $\omega^{(t)}_i$ and distilled tensor $\zeta^{(t)}_i$, the central server performs global aggregation in parallel to derive the updated global model $\omega^{(t)}_g$ and its distilled form $\zeta^{(t)}_g$. By leveraging parallel optimization, when the server receives all distilled local gradients $\zeta^{(t)}_i$, it will already have finalized the updated global gradient $\zeta^{(t)}_g$ through concurrent aggregation. The server can then immediately broadcast $\zeta^{(t)}_g$ to the edge devices to start the next round of federated optimization. • **Global Aggregation:** While the broadcasting stage is in progress, the central server reconstructs $\omega^{(t)}_i$ by: $$\omega^{(t)}_i = \omega^{(t-1)}_g - \text{optim} \left( \sum_{(\hat{x}, \hat{y}) \in \zeta^{(t)}_i} \frac{\partial \ell(\omega; (\hat{x}, \hat{y}))}{\partial \omega} \bigg|_{\omega=\omega^{(t-1)}_g} \right).$$ Based on $\omega^{(t)}_i$, the central server can get the new global model: $\omega^{(t+1)}_g = \frac{1}{N} \sum_{i=1}^{N} \omega^{(t)}_i$. It then performs gradient distillation between the two latest global models: $$\zeta^{(t+1)}_g = \arg \min_{\{(x_j, y_j)\}_{j=1}^{m_g^{(t+1)}}} \left\| \text{optim} \left( \sum_{j=1}^{m_g^{(t+1)}} \frac{\partial \ell(\omega; (x_j, y_j))}{\partial \omega} \bigg|_{\omega=\omega^{(t)}_g} \right) - \Delta \omega_g(t, t+1) \right\|_2^2.$$ This parallel execution scheme, with the server and devices simultaneously performing their respective operations, significantly improves the computational efficiency of FedGD. 3 EXPERIMENTS 3.1 EXPERIMENTAL SETUP Baselines. We compare our proposed FedGD approach with five baseline methods: (1) FedAvg, (2) Top-k (Aji & Heafield, 2017), (3) FedPAQ (Reiszadeh et al., 2020), (4) DAdaQ (Hönig et al., 2022), and (5) AdaGQ (Liu et al., 2023). Datasets. Experiments are conducted on three benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100, and a medical image dataset PathMNIST (Yang et al., 2023). PathMNIST is a dataset for predicting survival from colorectal cancer histology slides. It contains 100,000 images in the training set and 7,180 images in the test set. The image size is $3 \times 28 \times 28$. For CIFAR-10, we evaluate the generalization of our approach to different network architectures by testing with MobileNet (Howard et al., 2017), ShuffleNet (Zhang et al., 2018), and ResNet-18 (He et al., 2016).
To validate performance across datasets, we use MobileNet for experiments on CIFAR-100 and PathMNIST. All baselines use the same network architecture as FedGD for a fair comparison. Implementation Details. We implement FedGD as well as the baseline methods using the PyTorch framework. The SGD optimizer with a learning rate of 0.01 is used for all approaches. Unless otherwise stated, the batch size is 64 and the number of local epochs per round is set to 3. The training process spans 150 communication rounds, with gradient distillation initiated from the 31st round onwards. Following the commonly used simulation setting (Liu et al., 2023; Hönig et al., 2022), in our experiments, we simulate 50 virtual devices, and set the bandwidth to 50 Mbps. 3.2 PERFORMANCE EVALUATION Table 1 shows the test accuracy, average and minimum data transmission size reduction ratios of FedGD and the baseline methods on CIFAR-10 using three widely used network architectures. FedAvg serves as the comparison benchmark. Among the compared methods, Top-k and FedPAQ, which do not adopt adaptive schemes, have equal average and minimum compression ratios, and lower accuracy than FedAvg (by 3.63% and 2.71% for MobileNet). DAdaQ and AdaGQ apply different adaptive quantization schemes based on time and gradient norms. AdaGQ saves $2.68\times$ in communication compared to $2.19\times$ for DAdaQ, but with a 0.46% lower precision. Our method, FedGD, achieves higher precision than all baselines at 82.08%, only 0.4% lower than the uncompressed FedAvg. Most notably, it provides average $128\times$ compression, reaching up to $690\times$ in later rounds as the differences in network parameters shrink. Across MobileNet, ShuffleNet, and ResNet-18, FedGD significantly outperforms baselines in balancing training accuracy and communication efficiency. To evaluate the generalizability of our approach across different datasets, we further validate it on CIFAR-100 and PathMNIST in addition to the previous experiments. As shown in Table 2:
| Dataset | Method | Avg. data transmission Volume (MB) | Speedup | Min. data transmission Volume (MB) | Speedup | Accuracy (%) |
|-------------|-------------------------|------------------------------------|---------|-----------------------------------|---------|--------------|
| CIFAR-100 | FedAvg | 11.69 | 1× | 11.69 | 1× | 60.17 |
| | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 56.26 |
| | FedPAQ (Reiszadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 57.38 |
| | DAdaQ (Hönig et al., 2022) | 6.22 | 1.88× | 2.92 | 4× | 58.51 |
| | AdaGQ (Liu et al., 2023) | 5.65 | 2.07× | 2.92 | 4× | 58.37 |
| | FedGD | 0.0089 | 118× | 0.0122 | 958× | 59.84 |
| PathMNIST | FedAvg | 11.69 | 1× | 11.69 | 1× | 87.81 |
| | Top-k (Aji & Heafield, 2017) | 5.85 | 2× | 5.85 | 2× | 83.20 |
| | FedPAQ (Reiszadeh et al., 2020) | 2.92 | 4× | 2.92 | 4× | 84.74 |
| | DAdaQ (Hönig et al., 2022) | 4.97 | 2.36× | 2.92 | 4× | 86.17 |
| | AdaGQ (Liu et al., 2023) | 4.09 | 2.87× | 2.92 | 4× | 85.92 |
| | FedGD | 0.0345 | 339× | 0.0614 | 1904× | 87.54 |
Table 3: Results with different batch size, local update number and pre-distillation rounds. $T_g$ represents the calculated time for gradient distillation, $T_c$ represents the upload communication time of standard FL, $T_c^*$ represents the time of upload communication using gradient distillation FL.
| Batch size $B$ | Avg. data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup |
|----------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------|
| 32 | $3.28 \times 10^{-2}$ | 85.34 | 8.97 | 0.05 | 9.02 | 18.78 | 2.08× |
| 64 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× |
| 128 | $4.47 \times 10^{-2}$ | 84.97 | 12.22 | 0.07 | 12.29 | 18.78 | 1.53× |
| Local round number $K$ | Avg. data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup |
|------------------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------|
| 1 | $2.08 \times 10^{-2}$ | 85.79 | 5.69 | 0.03 | 5.72 | 18.78 | 3.28× |
| 2 | $2.73 \times 10^{-2}$ | 85.42 | 7.46 | 0.04 | 7.50 | 18.78 | 2.5× |
| 3 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× |
| Pre-distillation rounds $M$ | Avg. data volume (MB) | Accuracy (%) | $T_g$ (s) | $T_c^*$ (s) | $T_g + T_c^*$ (s) | $T_c$ (s) | Speedup |
|-----------------------------|-----------------------|--------------|-----------|-------------|------------------|----------|---------|
| 30 | $3.69 \times 10^{-2}$ | 85.11 | 10.09 | 0.06 | 10.15 | 18.78 | 1.85× |
| 50 | $2.95 \times 10^{-2}$ | 85.17 | 8.07 | 0.05 | 8.12 | 18.78 | 2.31× |
| 80 | $2.12 \times 10^{-2}$ | 85.19 | 5.80 | 0.03 | 5.83 | 18.78 | 3.22× |
In summary, FedGD achieves state-of-the-art compression ratios while maintaining high model performance, validating the effectiveness of our proposed gradient distillation approach. ### 3.3 Ablation Analysis on Hyperparameters Table 3 displays the compression ratios achieved by our method for varying batch sizes (32, 64, 128), round numbers of local updates (1, 2, 3), and pre-distillation rounds (30, 50, 80). It is evident that lower batch sizes and fewer local updates result in higher compression, as they entail smaller model parameter differences between rounds, containing less information for compression. Notably, compared to a batch size of 128, using a batch size of 32 further reduces communication by 1.36×. Moreover, increasing the number of pre-distillation rounds (i.e., applying compression in later training stages) leads to a reduction in average uploaded parameters, as differences diminish over time. When $M = 80$ rounds, only an average of $2.12 \times 10^{-2}$ MB parameters are uploaded. Additionally, we provide the computation time $T_g$ for gradient distillation, the communication time $T_c^*$ for uploading tensors generated by gradient distillation, and the communication time $T_c$ for uploading model differences. The combined time for $T_g$ and $T_c^*$ is significantly smaller than $T_c$, indicating that our method substantially shortens the training time per round. This improvement is attributed to the fact that gradient distillation significantly reduces the amount of data that needs to be transmitted, consequently reducing the time required for transmission. ### 4 Conclusion In this study, we introduced gradient distillation-based communication-efficient federated learning (FedGD), a novel approach where devices and the server synthesize tensor sequences to represent model updates, rather than transmitting raw model differences. Our key innovation lies in distilling the structural essence of gradients, as opposed to directly compressing them.
This enables the transmission of only essential information for synchronization, bypassing the need to transmit the entire set of raw parameter differences. Experimental results demonstrate that FedGD substantially reduces communication overhead without sacrificing accuracy significantly. This highlights its potential to scale privacy-preserving distributed training across edge networks by leveraging model-specific representations. REFERENCES Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. *arXiv preprint arXiv:1704.05021*, 2017. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6fcb4967418bfb8ac142f64a-Paper.pdf. Rui Chen, Dian Shi, Xiaoqi Qin, Dongjie Liu, Miao Pan, and Shuguang Cui. Service delay minimization for federated learning over mobile devices. *IEEE Journal on Selected Areas in Communications*, 41(4):990–1006, 2023. Travers Ching, Daniel S Himmelstein, Brett K Beaulieu-Jones, Alexandr A Kalinin, Brian T Do, Gregory P Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M Hoffman, et al. Opportunities and obstacles for deep learning in biology and medicine. *Journal of The Royal Society Interface*, 15(141):20170387, 2018. Pierre Courtiol, Charles Maussion, Matahi Moairi, Elodie Pronier, Samuel Pilcer, Meriem Sefta, Pierre Manceron, Sylvain Toldo, Mikhail Zaslavskiy, Nolwenn Le Stang, et al. Deep learning-based classification of mesothelioma improves prediction of patient outcome. *Nature medicine*, 25(10):1519–1525, 2019. Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, and Dacheng Tao. Dispfl: Towards communication-efficient personalized federated learning via decentralized sparse training. *arXiv preprint arXiv:2206.00187*, 2022. GDPR. General data protection regulation, 2016. URL https://gdprinfo.eu/. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local sgd with periodic averaging: Tighter analysis and adaptive synchronization. *Advances in Neural Information Processing Systems*, 32, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Robert Hönig, Yiren Zhao, and Robert Mullins. Dadaquant: Doubly-adaptive quantization for communication-efficient federated learning. In *International Conference on Machine Learning*, pp. 8852–8866. PMLR, 2022. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, and M Hadi Amini. A survey on federated learning for resource-constrained iot devices. 
uqPnesiGGi
Another claimed contribution is to capture both local and long-range information, but it is not discussed how motif masking captures long-range information beyond what random masking already does. The masked motifs can still be close to each other, and message passing is still limited to the k-hop neighborhood.
Attribute reconstruction is used to predict node or edge features in the pre-training of graph neural networks. Given a large number of molecules, pre-trained models learn to capture structural knowledge that is transferable to various downstream property prediction tasks and vital in chemistry, biomedicine, and materials science. Previous strategies that randomly select nodes for attribute masking leverage the information of local neighbors. However, the over-reliance on these neighbors inhibits the model's ability to learn long-range dependencies from higher-level substructures. For example, the model would learn little from predicting three carbon atoms in a benzene ring based on the other three but could learn more from the inter-connections between functional groups, also called chemical motifs. To explicitly quantify the inter-motif knowledge transfer of pre-trained models, we define inter-motif node influence measures. Then, we propose and investigate motif-aware attribute masking strategies to capture long-range inter-motif structures by leveraging the information of atoms in neighboring motifs. Once each graph is decomposed into disjoint motifs, the features for every node within a sampled motif are masked. The graph decoder then predicts the masked features of each node within the motif for reconstruction. We evaluate our approach on eight molecular property prediction datasets and demonstrate its advantages.

1 INTRODUCTION

Molecular property prediction has been an important topic of study in fields such as physical chemistry, physiology, and biophysics (Wu et al., 2017). It can be defined as a graph label prediction problem and addressed by machine learning. However, graph learning models such as graph neural networks (GNNs) must overcome issues of data scarcity, as the creation and testing of real-world molecules is an expensive endeavor (Chang et al., 2022). To address labeled data scarcity, model pre-training has been utilized as a fruitful strategy for improving a model's predictive performance on downstream tasks, as pre-training allows for the transfer of knowledge from large amounts of unlabeled data. The selection of a pre-training strategy is still an open question, with contrastive tasks (Zhu et al., 2021) and predictive/generative tasks (Hu et al., 2020a) being the most popular methods. Attribute reconstruction is one predictive method for graphs that utilizes masked autoencoders to predict node or edge features (Hu et al., 2020a; Kipf & Welling, 2016; Xia et al., 2022). Masked autoencoders have found success in vision and language domains (He et al., 2022; Devlin et al., 2018) and have been adopted as a pre-training objective for graphs because the reconstruction task is able to transfer structural pattern knowledge (Hu et al., 2020a), which is vital for learning domain-specific knowledge such as valency in materials science. Additional domain knowledge that is important for molecular property prediction is that of functional groups, also called chemical motifs (Pope et al., 2019). The presence of and interactions between chemical motifs directly influence molecular properties, such as reactivity and solubility (Frechet, 1994; Plaza et al., 2014). Prior work in message passing for quantum chemistry has shown that long-range dependencies are important for downstream prediction in chemical domains (Gilmer et al., 2017).
Therefore, to capture the interaction information between motifs, it is important to transfer inter-motif structural knowledge and other long-range dependencies during the pre-training of graph neural networks. Unfortunately, the random attribute masking strategies used in previous work for graph pre-training were not able to capture the long-range dependencies inherent in inter-motif knowledge (Kipf & Welling, 2016).

Figure 1: Our MoAMa masks every node in sampled motifs to pre-train GNNs. The full masking of a motif forces the GNNs to learn to (1) pass feature information across motifs and (2) pass local structural information within the motif. Compared to traditional random attribute masking strategies, motif-aware masking captures the most essential information to learn graph embeddings. Random masking would put most of the pre-training effort on passing feature information within a motif, e.g., predicting two carbon nodes in a benzene ring based on the other four.

Recent successes in vision and language domains have shown the utility of masking semantically related regions, such as pixel batches (Li et al., 2022; Xie et al., 2022; He et al., 2021) and multi-token spans (Levine et al., 2020; Sun et al., 2019; Joshi et al., 2020), and have demonstrated that a random masking strategy is not guaranteed to transfer necessary inter-part relations and intra-part patterns (Li et al., 2022). To better enable the transfer of long-range inter-part relations downstream, we propose a novel semantically-guided masking strategy based on chemical motifs. In Figure 1, we visually demonstrate our method for motif-aware attribute masking, where each molecular graph is decomposed into disjoint motifs. The node features of every node within a sampled motif are then masked by a mask token. A graph decoder predicts the masked features of each node within the motif as the reconstruction task. The benefits of this strategy are twofold. First, because all features of the nodes within the motif are masked, our strategy reduces the amount of feature information being passed within the motif and relieves the propagation bottleneck, allowing for greater transfer of inter-motif feature and structural information. Second, the masking of all intra-motif node features explicitly forces the decoder to transfer intra-motif structural information. A novel graph pre-training solution based on the Motif-aware Attribute Masking strategy, called MoAMa, is able to learn long-range inter-motif dependencies with knowledge of intra-motif structure. We evaluate our strategy on eight molecular property prediction datasets and demonstrate its improvement in inter-motif knowledge transfer as compared to previous strategies.

2 RELATED WORK

Molecular graph pre-training The prediction of molecular properties based on graphs is important (Wu et al., 2017). Molecules are scientific data that are time- and computation-intensive to collect and annotate for different property prediction tasks (Liu et al., 2023). Many self-supervised learning methods (Hu et al., 2020a; Hou et al., 2022; Zhang et al., 2021; Kim et al., 2022; Xia et al., 2023) were proposed to capture transferable knowledge from another large set of molecules without annotations. For example, AttrMask (Hu et al., 2020a) randomly masked atom attributes for prediction. GraphMAE (Hou et al., 2022) pre-trained the prediction model with generative tasks to reconstruct node and edge attributes. D-SLA (Kim et al., 2022) used contrastive learning based on graph edit distance.
These pre-training tasks do not capture useful knowledge for various domain-specific tasks well, since they fail to incorporate important domain knowledge during pre-training. A line of prior work (Zhang et al., 2021; Rong et al., 2020; Sun et al., 2021) used graph motifs, i.e., recurrent and statistically significant subgraphs such as functional groups, to characterize the domain knowledge contained in molecular graph structures. However, their solutions were tailored to specific frameworks for either generation-based or contrast-based molecular pre-training. Additionally, explicit motif type generation/prediction inherently does not transfer intra-motif structural information and is computationally expensive due to the large number of prediction classes. In this work, we study strategies of attribute masking that are aware of domain knowledge (i.e., motifs), which plays an essential role in self-supervised learning frameworks (Xia et al., 2023).

Masking strategies on molecules Attribute masking of atom nodes is a popular method in graph pre-training given its broad usage in predictive, generative, and contrastive self-supervised tasks (Hu et al., 2020a,b; Hou et al., 2022; You et al., 2020, 2021). For example, predictive and generative pre-training tasks (Hu et al., 2020a; Hou et al., 2022; Xia et al., 2023) mask atom attributes for prediction and reconstruction. Contrastive pre-training tasks (You et al., 2020, 2021) mask nodes to create another data view for alignment. Despite the widespread use of attribute masking in molecular pre-training, there is a notable absence of comprehensive research on its strategy and effectiveness. Previous studies have largely adopted strategies from the vision and language domains (He et al., 2022; Devlin et al., 2018), where atom attributes are randomly masked with a predetermined ratio. Since molecules are atoms held together by strict chemical rules, the data modality of molecular graphs is essentially different from natural images and languages. For molecular graphs, random attribute masking results in either over-reliance on intra-motif neighbors (Dwivedi et al., 2023) or breaking of inter-motif connections via random edge masking. In this work, we introduce a novel strategy of attribute masking, which turns out to capture and transfer useful knowledge from intra-motif structures and long-range inter-motif node features.

3 PRELIMINARIES

Graph property prediction Given a graph $G = (\mathcal{V}, \mathcal{E}) \in \mathcal{G}$ with the node set $\mathcal{V}$ for atoms and the edge set $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ for bonds, we have a $d$-dimensional node attribute matrix $X \in \mathbb{R}^{|\mathcal{V}| \times d}$ that represents atom features such as atom type and chirality. We use $y \in \mathcal{Y}$ as the graph-level property label for $G$, where $\mathcal{Y}$ represents the label space. For graph property prediction, a predictor with an encoder-decoder architecture is trained to encode $G$ into a representation vector in the latent space and decode the representation to predict $\hat{y}$. The training process optimizes the parameters to make $\hat{y}$ match the true label $y$. A GNN is a commonly used encoder that generates $k$-dimensional node representation vectors, denoted as $h_v \in \mathbb{R}^k$, for any node $v \in \mathcal{V}$: $$H = \{h_v : v \in \mathcal{V}\} = \text{GNN}(G) \in \mathbb{R}^{|\mathcal{V}| \times k}.
\quad (1)$$ Here $H$ is the node representation matrix for the graph $G$. Without loss of generality, we implement Graph Isomorphism Networks (GIN) (Xu et al., 2019) as the choice of GNN in accordance with previous work (Hu et al., 2020a). Once the set of node representations are created, a READOUT($\cdot$) function (such as max, mean, or sum) is used to summarize the node-level representation into graph-level representation $h_G$ for any $G$: $$h_G = \text{READOUT}(H) \in \mathbb{R}^k. \quad (2)$$ The graph-level representation vector $h_G$ is subsequently passed through a multi-layer perceptron (MLP) to generate the label prediction $\hat{y}$, which exists in the label space $\mathcal{Y}$: $$\hat{y} = \text{MLP}(h_G) \in \mathcal{Y}. \quad (3)$$ GNN pre-training Random initialization of the predictor’s parameters would easily result in suboptimal solutions for graph property prediction. This is because the number of labeled graphs is usually small. It prevents a proper coverage of task-specific graph and label spaces (Hu et al., 2020a; Liu et al., 2023). To improve generalization, GNN pre-training is often used to warm-up the model parameters based on a much larger set of molecules without labels. In this work, we focus on the attribute masking strategy for GNN pre-training that aims to predict the masked values of node attributes given the unlabeled graphs. 4 INTER-MOTIF INFLUENCE To measure the influence generally from (either intra-motif or inter-motif) source nodes on a target node \( v \), we design a measure that quantifies the influence from any source node \( u \) in the same graph \( G \), denoted by \( s(u, v) \). \( h_v \) was learned by Eq. (1) and was influenced by node \( u \). When the embedding of \( u \) is eliminated from GNN initialization, i.e., set \( h_u^{(0)} = 0 \), Eq. (1) would produce a new representation vector of node \( v \), denoted by \( h_{v,w/o\ u} \). We use the \( L^2 \)-norm to define the influence: \[ s(u, v) = \| h_v - h_{v,w/o\ u} \|_2. \tag{4} \] The collective influence from a group of nodes in a motif \( M = (V_M, E_M) \) is measured as follows: \[ s_{\text{motif}}(v, M) = \frac{1}{|V_M \setminus \{v\}|} \sum_{u \in V_M \setminus \{v\}} s(u, v). \tag{5} \] Suppose the target node \( v \) is in the motif \( M_v = (V_{M_v}, E_{M_v}) \). Using \( M_v \) as the target motif, the influence from intra-motif and inter-motif nodes can be calculated as: \[ s_{\text{intra}}(v) = s_{\text{motif}}(v, M_v); \quad s_{\text{inter}}(v) = \frac{\sum_{M \in \mathcal{M} \setminus \{M_v\}} |V_M| \times s_{\text{motif}}(v, M)}{|V \setminus V_{M_v}|}. \tag{6} \] Usually the number of inter-motif nodes is significantly bigger than the number of intra-motif nodes, i.e., \( |V| \gg |V_{M_v}| \), which reveals two issues in the influence measurements. First, when the target motif is too small (e.g., has only one or two nodes), the intra-motif influence cannot be defined or is defined on the interaction with only one neighbor node. Second, most inter-motif nodes are not expected to have any influence, so the average function in Eq. (5) would lead comparisons to be biased to intra-motif influence. To address the two issues, we constrain the influence summation to be on the same number of nodes (i.e., top-\( k \)) from the intra-motif and inter-motif node groups. Explicitly, this means \( u \in V_M \setminus \{v\} \) in Eq. (5) is sampled from the top-\( k \) most influential nodes (top-3). 
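To make these definitions concrete, the sketch below shows how the per-node influence of Eq. (4) and the top-$k$-restricted motif-level aggregation of Eq. (5) could be computed. It assumes a pre-trained `gnn` callable that maps a node-feature matrix `X` and an `edge_index` to node embeddings, and a list of atom indices per motif; these names and the simplified handling are our illustrative assumptions, not the authors' implementation.

```python
import torch

def node_influence(gnn, X, edge_index, u, v):
    # s(u, v) from Eq. (4): L2 distance between v's embedding computed with
    # the full node features and with node u's initial features zeroed out.
    with torch.no_grad():
        H = gnn(X, edge_index)
        X_wo_u = X.clone()
        X_wo_u[u] = 0.0          # eliminate u's embedding from the GNN initialization
        H_wo_u = gnn(X_wo_u, edge_index)
    return torch.norm(H[v] - H_wo_u[v], p=2).item()

def motif_influence(gnn, X, edge_index, v, motif_nodes, k=3):
    # s_motif(v, M) from Eq. (5), restricted to the top-k most influential
    # source nodes of the motif, as described in the text.
    scores = sorted(
        (node_influence(gnn, X, edge_index, u, v) for u in motif_nodes if u != v),
        reverse=True)
    top = scores[:k]
    return sum(top) / len(top) if top else 0.0
```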
The ratio of inter- to intra-motif influence over the graph dataset \( \mathcal{G} \) is then defined as: \[ \text{InfRatio}_{\text{node}} = \frac{1}{\sum_{(V,E) \in \mathcal{G}} |V|} \sum_{(V,E) \in \mathcal{G}} \sum_{v \in V} \frac{s_{\text{inter}}(v)}{s_{\text{intra}}(v)}, \tag{7} \] \[ \text{InfRatio}_{\text{graph}} = \frac{1}{|\mathcal{G}|} \sum_{G=(V,E) \in \mathcal{G}} \frac{1}{|V|} \sum_{v \in V} \frac{s_{\text{inter}}(v)}{s_{\text{intra}}(v)}, \tag{8} \] where the average is performed at the node level and graph level, respectively. Eq. (7) directly measures the influence ratios of all nodes \( v \) within the dataset \( \mathcal{G} \). However, this measure may include bias due to the distribution of nodes within each graph. We alleviate this bias in Eq. (8) by averaging influence ratios across each graph first. While the InfRatio measurements are able to compare general inter- and intra-motif influences, these measures combine all inter-motif nodes into one set and do not consider the number of motifs in each graph. We further define rank-based measures that consider the distribution of motif counts across \( \mathcal{G} \). Let \( \{M_1, ..., M_i, ..., M_n\} \) be an ordered set, where \( M_i \in \mathcal{M} \) and \( s_{\text{motif}}(v, M_i) \geq s_{\text{motif}}(v, M_j) \) if \( i < j \). If \( M_i = M_v \), we define \( \text{rank}_v = i \). Note that graphs with only one motif are excluded, as the distinction between inter- and intra-motif nodes loses meaning. From this ranking, we define our scores for inter-motif node influence averaged at the node, graph, and motif levels, derived from a similar score used in information retrieval, Mean Reciprocal Rank (MRR) (Craswell, 2009): \[ \text{MRR}_{\text{node}} = \frac{1}{\sum_{(V,E) \in \mathcal{G}} |V|} \sum_{(V,E) \in \mathcal{G}} \sum_{v \in V} \frac{1}{\text{rank}_v}, \tag{9} \] \[ \text{MRR}_{\text{graph}} = \frac{1}{|\mathcal{G}|} \sum_{(V,E) \in \mathcal{G}} \frac{1}{|V|} \sum_{v \in V} \frac{1}{\text{rank}_v}, \tag{10} \] \[ \text{MRR}_{\text{motif}} = \sum_{n=2}^{N} \frac{|\mathcal{G}^{(n)}|}{|\mathcal{G}| \sum_{(V,E) \in \mathcal{G}^{(n)}} |V|} \sum_{(V,E) \in \mathcal{G}^{(n)}} \sum_{v \in V} \frac{1}{\text{rank}_v}, \tag{11} \] where \( \mathcal{G}^{(n)} \subset \mathcal{G} \) is the set of graphs that contain \( n \in [2, ..., N] \) motifs. Similar to the InfRatio measurements, MRR\textsubscript{node} directly captures the impact of the influence ranks for each node within the full graph set, whereas MRR\textsubscript{graph} alleviates the bias caused by the number of nodes within a graph by averaging across individual graphs first. Because these rank-based measurements are intrinsically dependent on the number of motifs within each graph, we additionally define MRR\textsubscript{motif}, which weights the measurement towards popular motif counts within the data distribution. In information retrieval, MRR scores are used to quantify how well a system can return the most relevant item for a given query, and higher MRR scores indicate that relevant items were returned at better ranks. However, as opposed to traditional MRR measurements, where a higher score indicates better retrieval, lower scores are preferred for our MRR measurements, as a lower rank of the intra-motif influence indicates greater inter-motif node influence. In Sec. 6, we show the inter-motif node influence measurements of previous pre-trained models.
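The rank-based scores above reduce to simple arithmetic once the motif-level influences are available. The following sketch computes \( \text{rank}_v \) and the node- and graph-level MRR scores; `motif_influences` (a dict mapping each motif to \( s_{\text{motif}}(v, M) \)) and `per_graph_ranks` (a list containing, for each graph, the \( \text{rank}_v \) values of its nodes, with single-motif graphs filtered out beforehand) are our assumed data layouts, not the authors' code.

```python
def target_motif_rank(motif_influences, target_motif):
    # rank_v: position of the target motif when all motifs of the graph are
    # sorted by s_motif(v, M) in decreasing order.
    ordered = sorted(motif_influences, key=motif_influences.get, reverse=True)
    return ordered.index(target_motif) + 1

def mrr_node(per_graph_ranks):
    # MRR_node (Eq. (9)): pool the reciprocal ranks of all nodes of all graphs.
    total_nodes = sum(len(ranks) for ranks in per_graph_ranks)
    return sum(1.0 / r for ranks in per_graph_ranks for r in ranks) / total_nodes

def mrr_graph(per_graph_ranks):
    # MRR_graph (Eq. (10)): average the reciprocal ranks within each graph
    # first, then average over graphs, which removes the bias toward large graphs.
    per_graph = [sum(1.0 / r for r in ranks) / len(ranks) for ranks in per_graph_ranks]
    return sum(per_graph) / len(per_graph)
```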
5 Proposed Solution

In this section, we present our novel solution named MoAMa for effectively pre-training graph neural networks on molecular data. We give details about the strategy of motif-aware attribute masking and reconstruction. Each molecule $G$ has some portion of its nodes masked according to domain-knowledge-based motifs. We replace the node attributes of all masked nodes with a special mask token. Then, the GNN in Eq. (1) encodes the masked graph into the node representation space, and an MLP reconstructs the atom types of the attribute-masked molecule.

5.1 Knowledge-based Motif Extraction

To leverage expertise from the chemistry domain, we extract motifs for molecules using the BRICS (Breaking of Retrosynthetically Interesting Chemical Substructures) algorithm (Degen et al., 2008). This algorithm leverages chemical domain knowledge through 16 decomposition rules, which define the bonds that should be cleaved from the molecule in order to create a multi-set of disjoint subgraphs. Two key strengths of the BRICS algorithm over a motif-mining strategy (Geng et al., 2023) are that no training is required and that important structural features, such as rings, are inherently preserved. For each graph $G$, the BRICS algorithm decomposes the full graph into separate motifs. We denote the decomposition result as $\mathcal{M}_G = \{M_1, M_2, ..., M_n\}$, which is a set of $n$ motifs. Each motif $M_i = (\mathcal{V}_i, \mathcal{E}_i)$, for $i \in \{1, 2, ..., n\}$, is a disjoint subgraph of $G$ such that $\mathcal{V}_i \subset \mathcal{V}$ and $\mathcal{E}_i \subset \mathcal{E}$. For each motif multi-set $\mathcal{M}_G$, the union of all motifs $M_i \in \mathcal{M}_G$ should equal $G$. Formally, this means $\mathcal{V} = \bigcup_i \mathcal{V}_i$ and $\mathcal{E} = (\bigcup_i \mathcal{E}_i) \cup E_x$, where $E_x$ represents all the edges between motifs removed during the BRICS decomposition. Within the ZINC15 dataset (Sterling & Irwin, 2015), used for pre-training, each molecule has an average of 9.8 motifs, each of which has an average of 2.4 atoms.

5.2 Motif-aware Attribute Masking and Reconstruction

To perform motif-aware attribute masking, $m$ motifs are sampled to form the multi-set $\mathcal{M}'_G \subset \mathcal{M}_G$ such that $(\sum_{(\mathcal{V}_i, \mathcal{E}_i) \in \mathcal{M}'_G} |\mathcal{V}_i|)/|\mathcal{V}| = \alpha$, where $\alpha$ is a chosen ratio value. The motifs sampled for $\mathcal{M}'_G$ must adhere to two criteria: (1) each node within the motif must be within a $k$-hop neighborhood ($k$ equals the number of GNN layers) of an inter-motif node, and (2) sampled motifs may not be adjacent. These two criteria guarantee inter-motif knowledge access for each masked node. To adhere to the above criteria and account for variable motif sizes, we allow for some flexibility in the value of $\alpha$. We choose the bounds $0.15 < \alpha < 0.25$ in accordance with those used in previous works ($\alpha = 0.15$ (Hu et al., 2020a) and $\alpha = 0.25$ (Hou et al., 2022)). Given a selected motif $M \in \mathcal{M}'_G$, nodes within $M$ have their attributes masked by replacing them with a mask token [MASK], which is a vector $m \in \mathbb{R}^d$. Each element in $m$ is a special value that is not present within the attribute space for that particular dimension. For example, we may set the attribute for the atom type dimension in $m$ to the value 119, as there are 118 atom types in total (Hu et al., 2020a).
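A minimal sketch of the two steps described above, using RDKit's BRICS utilities together with NumPy and NetworkX, is given below. The function names, the SMILES-based input, and the simplified sampling loop (which ignores the $k$-hop-access and non-adjacency criteria as well as the bounds on $\alpha$) are our illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import networkx as nx
from rdkit import Chem
from rdkit.Chem import BRICS

def brics_motifs(smiles):
    # Decompose a molecule into disjoint motifs: drop the bonds that BRICS
    # would cleave and take the connected components of the remaining graph.
    mol = Chem.MolFromSmiles(smiles)
    cleaved = {tuple(sorted(atoms)) for atoms, _ in BRICS.FindBRICSBonds(mol)}
    g = nx.Graph()
    g.add_nodes_from(range(mol.GetNumAtoms()))
    for bond in mol.GetBonds():
        edge = tuple(sorted((bond.GetBeginAtomIdx(), bond.GetEndAtomIdx())))
        if edge not in cleaved:
            g.add_edge(*edge)
    return [sorted(component) for component in nx.connected_components(g)]

def mask_motifs(X, motifs, mask_token, alpha=0.2, seed=0):
    # Sample motifs until roughly an alpha fraction of nodes is covered and
    # replace every node feature inside a sampled motif with the mask token.
    rng = np.random.default_rng(seed)
    X_masked, masked_nodes = X.copy(), []
    for idx in rng.permutation(len(motifs)):
        if len(masked_nodes) / X.shape[0] >= alpha:
            break
        X_masked[motifs[idx]] = mask_token
        masked_nodes.extend(motifs[idx])
    return X_masked, masked_nodes
```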
We use $\mathcal{V}_{[MASK]} = \{v \in \mathcal{V}_i : M_i = (\mathcal{V}_i, \mathcal{E}_i) \in \mathcal{M}'_G\}$ to denote the set of all the masked nodes. We then define the input node features in the masked attribute matrix $X_{[MASK]} \in \mathbb{R}^{|\mathcal{V}| \times d}$ for any $v \in \mathcal{V}$ using the following equation: $$ (X_{[MASK]})_v = \begin{cases} X_v, & v \notin \mathcal{V}_{[MASK]}, \\ m, & v \in \mathcal{V}_{[MASK]}, \end{cases} $$ (12) where \((X_{[\text{MASK}]})_v\) and \(X_v\) denote the row of the node \(v\) in \(X_{[\text{MASK}]}\) and \(X\), respectively. With a GNN encoder, all nodes with attributes \(X_{[\text{MASK}]}\) for the masked graph \(G_{[\text{MASK}]}\) are encoded to the latent representation space according to Eq. (1): \(H = \text{GNN}(G_{[\text{MASK}]})\). \(H\) is then used to define the reconstruction loss of the node attributes: \[ L_{\text{rec}} = \mathbb{E}_{u \in V_{[\text{MASK}]}} [\log p(X|H)], \] (13) where \(p(X|H)\) for the reconstruction attribute value is inferred by a decoder. In practice, reconstruction loss is measured using the scaled cosine error (SCE) (Hou et al., 2022), which calculates the difference between the probability distribution for the reconstruction attributes and the one-hot encoded target label vector. This choice of reconstruction loss is further discussed in later sections. 5.3 Design Space of the Attribute Masking Strategy The design space of the motif-aware node attribute masking includes the following four parts: **Masking distribution** We investigate the influence of masking distribution to the masking strategy using two factors to control the distribution of masked attributes: - Percentage of nodes within a motif selected for masking: we propose to mask nodes from the selected motifs at different percentages. The percentage indicates the strength of the masked domain knowledge, which affects the hardness of the pre-training task of the attribute reconstruction. - Dimension of the attributes: We propose to conduct either node-wise or element-wise (dimension-wise) masking. Element-wise masking selects different nodes for masking in different dimensions according to the percentage, while node-wise masking selects different nodes for all-dimensional attribute masking in different motifs. **Reconstruction target** Existing molecular graph pre-training methods heavily rely on two atom attributes: atom type and chirality. Therefore, the reconstructive task could include one or both attributes using one or two different decoders. Experiments will find the most effective task definition. **Reconstruction loss** We study different implementations of reconstruction loss functions for \(L_{\text{rec}}\). They include cross entropy (CE), scaled cosine error (SCE) (Hou et al., 2022), and mean square error (MSE). GraphMAE (Hou et al., 2022) suggested that SCE was the best loss function, however, it is worth investigating the effect of the loss function choices in the motif-based study. Additionally, attribute masking focuses on local graph structures and suffers from representation collapse (Hu et al., 2020a; Hou et al., 2022). To address this issue, we use a knowledge-enhanced auxiliary loss \(L_{\text{aux}}\) to complement \(L_{\text{rec}}\). 
Given any two graphs \(G_i\) and \(G_j\) from the graph-based chemical space \(\mathcal{G}\), \(L_{\text{aux}}\) first calculates the Tanimoto similarity (Bajusz et al., 2015) between \(G_i\) and \(G_j\), denoted Tanimoto\((G_i, G_j)\), based on the bit-wise fingerprints, which characterize frequent fragments in the molecular graphs. Then \(L_{\text{aux}}\) aligns the latent representations with the Tanimoto similarity using the cosine similarity, inspired by previous work (Atsango et al., 2022). Formally, we define: \[ L_{\text{aux}} = \sum_{i,j} (\text{Tanimoto}(G_i, G_j) - \text{cosine}(h_{G_i}, h_{G_j})), \quad 1 \leq i, j \leq |\mathcal{G}|, i \neq j, \tag{14} \] where \(h_{G_i}\) and \(h_{G_j}\) are the graph representations of \(G_i\) and \(G_j\), respectively. The full pre-training loss is \(L = \beta L_{\text{rec}} + (1 - \beta)L_{\text{aux}}\), where \(\beta\) is a hyperparameter to balance these two loss terms (\(\beta = 0.5\)).

**Decoder model** The decoder trained via Eq. (13) could be a GNN or an MLP. Although the GNN decoder might be more powerful (Hou et al., 2022), we investigate whether an MLP delivers comparable or better performance with higher efficiency.

6 EXPERIMENTS

6.1 Experimental Settings

**Datasets** Following the setting of previous studies (Hou et al., 2022; Kim et al., 2022; Xia et al., 2023), 2 million unlabeled molecules from the ZINC15 dataset (Sterling & Irwin, 2015) were used to pre-train the GNN models. To evaluate the performance on downstream tasks, experiments were conducted across eight binary classification benchmark datasets from MoleculeNet (Wu et al., 2017).

Table 1: Test AUC (%) performance on eight molecular datasets comparing our method with baselines. The best AUC-ROC values for each dataset are in **bold**. All models use the same GNN architecture except those indicated by *.
| Method | MUV | ClinTox | SIDER | HIV | Tox21 | BACE | ToxCast | BBBP | Avg |
|------------------|-------|---------|-------|-------|-------|-------|--------|-------|-------|
| No Pretrain | 70.7±1.8 | 58.4±6.4 | 58.2±1.7 | 75.5±0.8 | 74.6±0.4 | 72.4±3.8 | 61.7±0.5 | 65.7±3.1 | 67.2 |
| MCM* (Wang et al., 2022) | 74.4±0.6 | 64.7±0.5 | 62.3±0.9 | 72.7±0.3 | 74.4±0.1 | 79.5±1.3 | 61.0±0.4 | 71.6±0.6 | 69.7 |
| MGSSL (Zhang et al., 2021) | 77.6±0.4 | 77.1±4.3 | 61.6±1.0 | 75.8±0.4 | 75.2±0.6 | 78.8±0.9 | 63.3±0.5 | 68.8±0.9 | 72.3 |
| Grover* (Rong et al., 2020) | 50.6±0.4 | 75.4±8.8 | 57.1±1.6 | 67.1±0.3 | 76.3±0.6 | 79.5±1.1 | 63.4±0.6 | 68.0±1.5 | 67.2 |
| AttrMask (Hu et al., 2020a) | 75.8±1.0 | 73.5±4.3 | 60.5±0.9 | 75.3±1.5 | 75.1±0.9 | 77.8±1.8 | 63.3±0.6 | 65.2±1.4 | 70.8 |
| ContextPred (Hu et al., 2020a) | 72.5±1.5 | 74.0±3.4 | 59.7±1.8 | 75.6±1.0 | 73.6±0.3 | 78.8±1.2 | 62.6±0.6 | 70.6±1.5 | 70.9 |
| GraphMAE (Hou et al., 2022) | 76.3±2.4 | 82.3±1.2 | 60.3±1.1 | 77.2±1.0 | 75.5±0.6 | 83.1±0.9 | 64.1±0.3 | 72.0±0.6 | 73.9 |
| Mole-BERT (Xia et al., 2023) | 78.6±1.8 | 78.9±3.0 | 62.8±1.1 | 78.2±0.8 | 76.8±0.5 | 80.8±1.4 | 64.3±0.2 | 71.9±1.6 | 74.0 |
| JOAO (You et al., 2021) | 76.9±0.7 | 66.6±1.3 | 60.4±1.5 | 76.9±0.7 | 74.8±0.6 | 73.2±1.6 | 62.8±1.0 | 66.4±1.0 | 71.1 |
| GraphLoG (Xu et al., 2021) | 76.0±1.1 | 76.7±3.3 | 61.2±1.1 | 77.8±0.8 | 75.7±0.5 | 83.5±1.2 | 63.5±0.7 | 72.5±0.8 | 73.4 |
| D-SLA (Kim et al., 2022) | 76.6±0.9 | 80.2±1.5 | 60.2±1.1 | 78.6±0.4 | 76.8±0.5 | 83.8±1.0 | 64.2±0.5 | 72.6±0.8 | 73.9 |
| MoAMa w/o $L_{aux}$ | 78.5±0.4 | 84.2±0.4 | 61.2±0.2 | 79.5±0.5 | 76.2±0.3 | 84.1±0.2 | 64.6±0.1 | 71.8±0.7 | 75.0 |
| MoAMa | 80.0±0.8 | 85.3±2.2 | 64.6±0.5 | 79.3±0.6 | 76.5±0.1 | 80.1±0.5 | 63.0±0.4 | 72.8±0.9 | 75.3 |

Validation methods and evaluation metrics In accordance with previous work, we adopt a scaffold splitting approach (Hu et al., 2020a; Zhang et al., 2021). Random splitting may not reflect the actual use case, so molecules are divided according to their structures into train, validation, and test sets (Wu et al., 2017), using an 80:10:10 split for the three sets. We use the area under the ROC curve (AUC) to evaluate the test performance of the best validation step over 10 independent runs.

Model configurations For fair comparison with previous work, a five-layer Graph Isomorphism Network (GIN) with an embedding dimension of 300 was chosen for the GNN encoder. The READOUT strategy is mean pooling. During pre-training and fine-tuning, models were trained for less than 100 epochs using the Adam optimizer and a learning rate of 0.001. The batch sizes for pre-training and fine-tuning are 256 and 32, respectively.

6.2 Baselines

There are two general types of baseline graph pre-training strategies that we evaluate our work against: contrastive learning tasks, such as D-SLA (Kim et al., 2022), GraphLoG (Xu et al., 2021), and JOAO (You et al., 2021), and attribute reconstruction, including Grover (Rong et al., 2020), AttrMask (Hu et al., 2020a), ContextPred (Hu et al., 2020a), GraphMAE (Hou et al., 2022), and Mole-BERT (Xia et al., 2023). Additionally, we evaluate against motif-based pre-training strategies: MGSSL (Zhang et al., 2021), which recurrently generates the motif tree for any molecule, and MCM (Wang et al., 2022), which uses a motif-based convolution module to generate embeddings.

6.3 Results

We report the AUC-ROC of different graph pre-training methods in Table 1. MoAMa outperforms all baseline methods on five out of eight datasets.
On average, MoAMa outperforms the best baseline method Mole-BERT (Xia et al., 2023) by 1.3% and the best contrastive learning method D-SLA (Kim et al., 2022) by 1.4%. Even without the auxiliary loss $L_{aux}$, our motif-aware masking strategy maintains a performance improvement of 1.0% and remains competitive with previous methods.

6.4 Ablation Studies

To verify the motif-aware masking parameters, we conduct ablation studies on the selection of masking distributions, reconstruction target attribute(s), reconstruction loss function, and decoder model.

Study on Masking Distributions For motif-aware masking, one can either mask the features of all nodes within a motif or mask the features of only a percentage of nodes within each sampled motif. For our study, we choose a motif coverage parameter that decides what percentage of nodes within each motif to mask, ranging over 25%, 50%, 75%, and 100%. Furthermore, the masking strategy utilized by previous work performs node-wise masking (Hu et al., 2020a; Hou et al., 2022), where all features of a node are masked. An alternative strategy is element-wise masking, where masked elements are chosen over all feature dimensions, which implies that not all features of a node are necessarily masked. Note that at 100% motif coverage, element-wise masking behaves exactly the same as node-wise masking, since every node within a motif has every feature masked.

Table 2: Strategy design for motif-aware attribute masking: (1) masking distribution, (2) reconstruction target, (3) reconstruction loss, and (4) decoder model. The chosen design is highlighted.

| Design Space | MUV | ClinTox | SIDER | HIV | Tox21 | BACE | ToxCast | BBBP | Avg |
|-----------------------|-------|---------|-------|-------|-------|-------|---------|-------|------|
| (1) **100% Motif Coverage** | 80.0±0.8 | 85.3±2.2 | 64.6±0.5 | 79.3±0.6 | 76.5±0.1 | 80.1±0.5 | 63.0±0.4 | 72.8±0.9 | 75.3 |
| 75% Node-wise | 74.9±1.1 | 82.3±0.4 | 60.1±0.3 | 78.8±0.9 | 76.1±0.1 | 82.3±0.0 | 63.4±0.0 | 72.1±1.0 | 73.7 |
| 75% Element-wise | 74.8±0.7 | 84.9±1.0 | 58.7±0.1 | 79.7±0.7 | 75.6±0.1 | 85.7±0.4 | 63.4±0.2 | 72.6±0.4 | 74.4 |
| 50% Node-wise | 76.6±1.2 | 86.4±0.6 | 58.3±0.1 | 78.1±0.3 | 75.1±0.2 | 81.9±0.3 | 64.6±0.1 | 72.7±0.1 | 74.2 |
| 50% Element-wise | 73.9±0.2 | 71.2±4.0 | 61.2±0.4 | 77.5±0.8 | 74.9±0.4 | 81.1±0.7 | 62.5±0.1 | 70.6±1.8 | 71.6 |
| 25% Node-wise | 76.6±1.5 | 86.3±0.7 | 62.4±0.2 | 78.4±0.2 | 75.9±0.2 | 81.8±0.1 | 65.1±0.1 | 74.7±0.2 | 75.1 |
| 25% Element-wise | 75.2±1.5 | 82.1±0.4 | 58.3±0.1 | 77.8±1.5 | 75.5±0.2 | 81.5±0.2 | 63.1±0.1 | 71.6±0.3 | 73.1 |
| (2) **Atom Type** | 80.0±0.8 | 85.3±2.2 | 64.6±0.5 | 79.3±0.6 | 76.5±0.1 | 80.1±0.5 | 63.0±0.4 | 72.8±0.9 | 75.3 |
| Chirality | 76.3±1.8 | 75.1±0.9 | 59.8±0.5 | 77.9±0.1 | 76.6±0.1 | 79.8±0.5 | 63.8±0.2 | 73.8±0.7 | 72.9 |
| Both w/ one decoder | 76.2±1.4 | 74.4±1.1 | 62.4±0.9 | 78.2±1.1 | 75.5±0.6 | 82.1±0.4 | 64.3±0.2 | 72.9±0.2 | 73.3 |
| Both w/ two decoders | 75.9±0.9 | 81.5±1.0 | 60.5±0.1 | 78.5±0.9 | 75.8±0.2 | 82.0±0.1 | 63.7±0.3 | 73.4±0.3 | 73.9 |
| (3) **Scaled Cosine Error** | 80.0±0.8 | 85.3±2.2 | 64.6±0.5 | 79.3±0.6 | 76.5±0.1 | 80.1±0.5 | 63.0±0.4 | 72.8±0.9 | 75.3 |
| Cross Entropy | 78.8±1.1 | 84.5±0.7 | 65.4±0.2 | 78.6±0.4 | 76.3±0.1 | 82.4±0.2 | 62.9±0.5 | 72.3±0.2 | 75.1 |
| Mean Squared Error | 80.0±0.5 | 84.1±1.4 | 64.6±0.5 | 78.3±0.4 | 76.8±0.2 | 80.5±0.6 | 62.8±0.3 | 71.8±0.6 | 74.9 |
| (4) **GNN decoder** | 80.0±0.8 | 85.3±2.2 | 64.6±0.5 | 79.3±0.6 | 76.5±0.1 | 80.1±0.5 | 63.0±0.4 | 72.8±0.9 | 75.3 |
| MLP decoder | 78.8±0.5 | 85.2±0.1 | 65.5±0.3 | 78.1±0.8 | 76.2±0.2 | 82.1±0.6 | 62.8±0.8 | 71.7±0.4 | 75.1 |

We provide the predictive performance in Table 2. Node-wise masking outperforms element-wise masking for both 25% and 50% node coverage; at 75% coverage, element-wise masking outperforms node-wise. However, the full-coverage masking strategy outperforms all other masking strategies, due to the hardness of the pre-training task, which enables greater transfer of inter-motif knowledge.

Study on Reconstruction Targets The choice of attributes to reconstruct for GNNs towards molecular property prediction has traditionally been atom type (Hu et al., 2020a; Hou et al., 2022). However, there are other choices for reconstruction that could be explored. We verify the choice of reconstruction attributes by comparing the performance of the baseline model against models trained by reconstructing only chirality, both atom type and chirality using two separate decoders, or both properties using one unified decoder. From Table 2, we note that predicting solely atom type yields the best pre-training results. The second best strategy was to predict both atom type and chirality using two decoders. In this case, the losses of the two decoders are independent, leading to the conclusion that the chirality prediction task is ill-suited as a pre-training task. Because chirality is limited to four extremely imbalanced classes, the useful transferable knowledge may be significantly less than that of atom type prediction, which, for the ZINC15 dataset, has nine types.

Study on Reconstruction Loss Functions For the pre-training task, we have three choices of error functions to calculate the training loss. A standard error function used for masked autoencoders within computer vision (He et al., 2022; Zhang et al., 2022; Germain et al., 2015) is the cross-entropy loss, whereas previous GNN solutions utilize mean squared error (MSE) (Hu et al., 2020b; Park et al., 2019; Salehi & Davulcu, 2019; Wang et al., 2017). GraphMAE (Hou et al., 2022) proposed that cosine error could mitigate sensitivity and selectivity issues: $$L_{\text{rec}} = \frac{1}{|\mathcal{V}_{[\text{MASK}]}|} \sum_{v \in \mathcal{V}_{[\text{MASK}]}} \left(1 - \frac{\mathbf{X}_v^T \mathbf{H}_v}{||\mathbf{X}_v|| \cdot ||\mathbf{H}_v||}\right)^{\gamma}, \gamma \geq 1. \tag{15}$$ This equation is called the scaled cosine error (SCE), where $\mathbf{H}$ are the reconstructed features, $\mathbf{X}$ are the ground-truth node features, and $\gamma$ is a scaling factor ($\gamma = 1$). We investigate the effect these different error functions have on downstream predictive performance in Table 2 and find that SCE outperforms CE and MSE, in accordance with previous work.

Study on Decoder Model Choices We follow the GNN decoder settings from previous work (Hou et al., 2022) to determine which decoder leads to better downstream predictive performance. In Table 2, we show that our method outperforms the MLP-decoder strategy, which supports previous findings that MLP-based decoders reduce model expressiveness because of the inability of MLPs to utilize the high number of embedded features (Hou et al., 2022).

6.5 INTER-MOTIF INFLUENCE ANALYSIS

In Table 3, we report the two InfRatio and three MRR measurements for our model and several baselines.
A higher influence ratio indicates that inter-motif nodes have a greater effect on the target node.

Table 3: Measurements of inter-motif knowledge transfer using pre-trained models. A higher ratio is preferred for the InfRatio measurements, and a lower score is preferred for the MRR measurements.

| Model | Avg Test AUC | InfRatio_node ↑ | InfRatio_graph ↑ | MRR_node ↓ | MRR_graph ↓ | MRR_motif ↓ |
|------------|--------------|-----------------|------------------|------------|-------------|-------------|
| AttrMask | 70.8 | 0.70 | 0.44 | 0.66 | 0.64 | 0.51 |
| MGSSL | 72.3 | 0.60 | 0.38 | 0.77 | 0.75 | 0.64 |
| GraphLoG | 73.4 | 0.79 | 0.50 | 0.61 | 0.59 | 0.48 |
| D-SLA | 73.8 | 0.76 | 0.49 | 0.67 | 0.66 | 0.44 |
| GraphMAE | 73.9 | 0.76 | 0.48 | 0.64 | 0.61 | 0.49 |
| Mole-BERT | 74.0 | 0.66 | 0.42 | 0.72 | 0.70 | 0.59 |
| MoAMa | **75.3** | **0.80** | **0.51** | **0.59** | **0.55** | **0.41** |

Figure 2: Inter-motif knowledge transfer score by motif count. A higher $MRR_{\text{inter}}^{(n)}$ score denotes greater inter-motif knowledge transfer.

The relatively low values indicate that the intra-motif node influence is still highly important for the pre-training task, but our method demonstrates the highest inter-motif knowledge transfer amongst the baselines. We see that there is a small positive correlation between the average test AUC of each model and the InfRatio measurements, which supports our claim that greater inter-motif knowledge transfer leads to higher predictive performance. For the MRR measurements, our method achieves the lowest scores, which indicates less dependence on intra-motif knowledge and greater inter-motif knowledge transfer. For the sake of clear visualization, we define an inter-motif score that indicates inter-motif knowledge transfer according to the number of motifs $n$ within a graph: $$MRR_{\text{inter}}^{(n)} = 1 - \frac{1}{\sum_{(V,E) \in \mathcal{G}^{(n)}} |V|} \sum_{(V,E) \in \mathcal{G}^{(n)}} \sum_{v \in V} \frac{1}{\text{rank}_v}. \quad (16)$$ Figure 2 shows that our method achieves higher $MRR_{\text{inter}}^{(n)}$ scores than all other models across different motif counts, i.e., greater inter-motif knowledge transfer. Additionally, the inter-motif knowledge transfer of our method becomes more pronounced on graphs with higher numbers of motifs.

7 CONCLUSIONS

In this work, we introduced a novel motif-aware attribute masking strategy for attribute reconstruction during graph model pre-training. This motif-aware masking strategy outperformed existing methods that used random attribute masking, and achieved results competitive with state-of-the-art methods because of the explicit transfer of long-range inter-motif knowledge and intra-motif structural information. We quantitatively verified the increase in inter-motif knowledge transfer of our strategy over previous works using inter-motif node influence measurements.

REFERENCES

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications, 2021.
Austin Atsango, Nathaniel L. Diamant, Ziqing Lu, Tommaso Biancalani, Gabriele Scalia, and Kangway V. Chuang. A 3d-shape similarity-based contrastive approach to molecular representation learning, 2022.
Dávid Bajusz, Anita Rácz, and Károly Héberger. Why is tanimoto index an appropriate choice for fingerprint-based similarity calculations? Journal of Cheminformatics, 7, 2015.
Rees Chang, Yu-Xiong Wang, and Elif Ertekin.
Towards overcoming data scarcity in materials science: unifying models and datasets with a mixture of experts framework, 2022.
Nick Craswell. Mean Reciprocal Rank, pp. 1703–1703. Springer US, Boston, MA, 2009. ISBN 978-0-387-39940-9. doi: 10.1007/978-0-387-39940-9_488. URL https://doi.org/10.1007/978-0-387-39940-9_488.
Jörg Degen, Christof Wegscheid-Gerlach, Andrea Zaliani, and Matthias Rarey. On the art of compiling and using ‘drug-like’ chemical fragment spaces. ChemMedChem, 3, 2008.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark, 2023.
Jean MJ Frechet. Functional polymers and dendrimers: reactivity, molecular architecture, and interfacial energy. Science, 263(5154):1710–1715, 1994.
Zijie Geng, Shufang Xie, Yingce Xia, Lijun Wu, Tao Qin, Jie Wang, Yongdong Zhang, Feng Wu, and Tie-Yan Liu. De novo molecular generation via connection-aware motif mining. arXiv preprint arXiv:2302.01129, 2023.
Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation, 2015.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry, 2017.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners, 2021.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.
Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, C. Wang, and Jie Tang. Graphmae: Self-supervised masked graph autoencoders. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks, 2020a.
Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. Gpt-gnn: Generative pre-training of graph neural networks, 2020b.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans, 2020.
Dongki Kim, Jinheon Baek, and Sung Ju Hwang. Graph self-supervised learning with accurate discrepancy learning. ArXiv, abs/2202.02989, 2022.
Thomas N. Kipf and Max Welling. Variational graph auto-encoders, 2016.
Dc4rXq3HIA
The consistency loss implies that each head is trained on all the domains with different weights. This is similar to the case where each head is trained on all the domains with a weighted combination of the losses of all the domains. For the predictor $f^{(d)}$, the loss is in this form: $l(\frac{\sum_j^{N^{tr}} a_{dj} f^{(d)}(x^j)}{\sum_j^{N_{tr}} a_{dj}})$. Would this loss also help the learning of domain-specific models?
IMPROVING DOMAIN GENERALIZATION WITH DOMAIN RELATIONS Huaxiu Yao\textsuperscript{1,2*}, Xinyu Yang\textsuperscript{3*}, Xinyi Pan\textsuperscript{4}, Shengchao Liu\textsuperscript{5}, Pang Wei Koh\textsuperscript{6}, Chelsea Finn\textsuperscript{1} \textsuperscript{1}Stanford University, \textsuperscript{2}UNC-Chapel Hill, \textsuperscript{3}CMU, \textsuperscript{4}UCLA, \textsuperscript{5}Caltech, \textsuperscript{6}University of Washington huaxiu@cs.unc.edu, xinyuya2@andrew.cmu.edu, cbfinn@cs.stanford.edu ABSTRACT Distribution shift presents a significant challenge in machine learning, where models often underperform during the test stage when faced with a different distribution than the one they were trained on. This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on, and propose a new approach called D\textsuperscript{3}G. Unlike previous methods that aim to learn a single model that is domain invariant, D\textsuperscript{3}G leverages domain similarities based on domain metadata to learn domain-specific models. Concretely, D\textsuperscript{3}G learns a set of training-domain-specific functions during the training stage and reweights them based on domain relations during the test stage. These domain relations can be directly obtained and learned from domain metadata. Under mild assumptions, we theoretically prove that using domain relations to reweight training-domain-specific functions achieves stronger out-of-domain generalization compared to the conventional averaging approach. Empirically, we evaluate the effectiveness of D\textsuperscript{3}G using real-world datasets for tasks such as temperature regression, land use classification, and molecule-protein binding affinity prediction. Our results show that D\textsuperscript{3}G consistently outperforms state-of-the-art methods. 1 INTRODUCTION Distribution shift is a common problem in real-world applications (Gulrajani and Lopez-Paz [2021], Koh et al. [2021a]). When the test distribution differs from the training distribution, machine learning models often experience a significant decline in performance. In this paper, we specifically focus on addressing domain shifts, which arise when applying a trained model to new domains that differ from its training domains. An example of this is predicting how well a drug will bind to a specific target protein. In drug discovery, each protein is a specific domain (Ji et al. [2022]), and the binding task on each domain is expensive due to the cost of lab experiments. Thus, an open challenge is to train a robust model that can be generalized to novel protein domains, which is essential when searching for potential drug candidates that can bind to proteins associated with newly discovered diseases. To address domain shifts, prior domain generalization approaches mainly learn a single model that is domain invariant (Arjovsky et al. [2019], Krueger et al. [2021a], Li et al. [2018b], Sun and Saenko [2016], Yao et al. [2022b]), and differ in techniques they use to encourage invariance. These methods have shown promise, but there remains significant room for improvement under real-world domain shifts such as those in the WILDS benchmark (Koh et al. [2021b]). Unlike learning a single domain-invariant model, we posit that models may perform better if they were specialized to a given domain. 
Compared with traditional domain generalization methods, which assume that predictions should be based solely on "causal" or general features, learning multiple domain-specific models has a potential advantage: in practical scenarios, different domains can exhibit strong correlations with non-general features, and domain-specific models can exploit these features to make more accurate predictions. While there are clearly some possible benefits to learning domain-specific models, it remains unclear how to construct a domain-specific model for a new domain seen at test time, without any training data for that domain. To resolve this challenge, we propose a novel approach called D³G to learn a set of diverse, training-domain-specific functions during the training stage, where each function corresponds to a single domain.

*Equal contribution. Work was done during Xinyu Yang's remote internship at Stanford.

For each test domain, D³G leverages the domain relations to weight these training-domain-specific functions and perform inference. Our approach is based on two main hypotheses: firstly, similar domains exhibit similar predictive functions, and secondly, the test domain shares sufficient similarities with some of the training domains. By capitalizing on these hypotheses, we can develop a robust model for each test domain that incorporates information about its relation with the training domains. These domain relations are derived from domain meta-data, such as protein-protein interactions or geographical proximity, in various applications. Additionally, D³G incorporates a consistency regularizer that utilizes training domain relations to enhance the training of domain-specific predictors, especially for data-insufficient domains. Through our theoretical analysis under mild assumptions, we demonstrate that D³G achieves superior out-of-domain generalization by leveraging domain relations to reweight training-domain-specific functions, surpassing the performance of traditional averaging methods. To further validate our findings, we conduct comprehensive empirical evaluations of D³G on diverse datasets encompassing both synthetic and real-world scenarios with natural domain shifts. The results establish the superiority of D³G over the best prior method, exhibiting an average improvement of 10.6%.

2 PRELIMINARIES

Out-of-Distribution Generalization. In this paper, we consider the problem of predicting the label \( y \in Y \) based on the input feature \( x \in X \). Given training data distributed according to \( P^{tr} \), we train a model \( f \) parameterized by \( \theta \in \Theta \) using a loss function \( \ell \). Traditional empirical risk minimization (ERM) optimizes the following objective: \[ \arg \min_{\theta \in \Theta} \mathbb{E}_{(x,y) \sim P^{tr}}[\ell(f_\theta(x), y)]. \] The trained model is evaluated on a test set from a test distribution \( P^{ts} \). When distribution shift occurs, the training and test distributions are different, i.e., \( P^{tr} \neq P^{ts} \). Concretely, following Koh et al. (2021b), we consider a setting in which the overall data distribution is drawn from a set of domains \( D = \{1, \ldots, D\} \), where each domain \( d \in D \) is associated with a domain-specific data distribution \( P_d \) over a set \((X,Y,d) = \{(x_i, y_i, d)\}_{i=1}^{n_d}\).
The training distribution and test distribution are both considered to be mixture distributions of the \( D \) domains, i.e., \( P^{tr} = \sum_{d \in D} r^{tr}_d P_d \) and \( P^{ts} = \sum_{d \in D} r^{ts}_d P_d \), respectively, where \( r^{tr}_d \) and \( r^{ts}_d \) denote the mixture probabilities in the training set and test set, respectively. We also define the training domains and test domains as \( D^{tr} = \{d \in D | r^{tr}_d > 0\} \) and \( D^{ts} = \{d \in D | r^{ts}_d > 0\} \), respectively. In this paper, we consider domain shifts, where the test domains are disjoint from the training domains, i.e., \( D^{tr} \cap D^{ts} = \emptyset \). In addition, the domain ID of training and test datapoints are available. Domain Relations and Domain Meta-Data. In this study, our objective is to address domain shift by harnessing the power of domain relations, which encapsulate the similarity or relatedness between different domains. To illustrate this concept, we focus on the protein-ligand binding affinity prediction task, where each protein is treated as an individual domain. Domains are considered related if they exhibit similar protein sequences or belong to the same protein family. To formalize these domain relations, we introduce an undirected domain similarity matrix denoted as \( A = \{a_{ij}\}_{i,j=1}^D \), where each element \( a_{ij} \) quantifies the strength of the relationship between domains \( i \) and \( j \). In this paper, we derive the domain relations by leveraging domain meta-data \( M = \{m_i\}_{i=1}^D \), which depict the distinctive properties of each domain. 3 LEVERAGING DOMAIN RELATIONS FOR OUT-OF-DOMAIN GENERALIZATION We now describe the proposed method – D³G (leveraging domain distances for out-of-domain generalization). The goal of D³G is to improve out-of-domain generalization by constructing domain-specific models. During the training phase, we employ a multi-headed network architecture where each head corresponds to a specific training domain (Figure 1(a)). This allows us to learn domain-specific functions and capture the nuances of each domain. To address the challenge of limited training data in certain domains, we introduce a consistency loss that aids in training (Figure 1(c)). Figure 1: An illustration of D³G. (a) The multi-headed architecture of D³G, where each training domain is associated with a single head for prediction. (b) The relation extraction module, where fixed relations are extracted from domain meta-data and refined through learning from the same meta-data. (c) The training stage of D³G, where \( x \) represents a single example from domain \( d \), and the loss is composed of both a supervised loss and a consistency loss. (d) The test stage, where the weighting of all training domain-specific functions is used to perform inference for each test example. During the testing phase, we construct test domain-specific models for inference by reweighing the training domain-specific models. This reweighting process takes into account the similarity between the training and test domains, allowing us to adapt the model to the specific characteristics of the test domain (Figure 1(d)). To establish domain relations, we extract information directly from domain meta-data and refine the relationships through meta-data-driven learning (Figure 1(b)). In the following sections, we delve into the details of the training and inference processes, elucidating how we construct domain-specific models and obtain domain relations. 
### 3.1 Building Domain-Specific Models

In this section, we present the details of D³G for learning a collection of domain-specific functions during the training stage and leveraging these functions for relational inference during the test stage.

**Training Stage.** During the training phase, our approach utilizes a multi-headed neural network architecture comprising \( N^{tr} \) heads, where \( N^{tr} \) denotes the number of training domains. Given an input datapoint \((x, y)\) from domain \( d \), we denote the prediction made by the \( d \)-th head as \( f^{(d)}(x) = h^{(d)}(e(x)) \), where \( h^{(d)}(\cdot) \) represents the domain-specific head for domain \( d \), and \( e(\cdot) \) represents the feature extractor. Our objective is to minimize the predictive risk for each datapoint when using the corresponding head, ensuring accurate predictions within each domain. This is achieved by minimizing the following loss function: \[ L_{pred} = \mathbb{E}_{d \in \mathcal{D}^{tr}} \mathbb{E}_{(x,y) \sim P_d} [\ell(f^{(d)}(x), y)]. \] In certain scenarios, some training domains may contain limited data compared to the overall training set, posing difficulties in training domain-specific predictors. To address this challenge, we leverage the assumption that similar domains tend to have similar predictive functions. Building upon this assumption, we introduce a relation-aware consistency regularizer. For each example \((x, y)\) within a training domain \( d \), the regularizer incorporates domain relations to weigh the predictions generated by all training predictors, except the corresponding predictor \( f^{(d)} \). The formulation of the relation-aware consistency loss is as follows: \[ L_{rel} = \mathbb{E}_{d \in \mathcal{D}^{tr}} \mathbb{E}_{(x,y) \sim P_d} \left[ \ell \left( \frac{\sum_{j=1,j \neq d}^{N^{tr}} a_{dj} f^{(j)}(x)}{\sum_{k=1,k \neq d}^{N^{tr}} a_{dk}}, y \right) \right], \] where \( a_{dj} \) is defined as the strength of the relation between domains \( d \) and \( j \). This loss encourages the weighted average prediction obtained from all other training predictors, with the domain relations as weights, to be consistent with the ground truth. By doing so, the regularizer encourages the model to (1) rely more on predictions made by similar domains and less on predictions made by dissimilar domains, and (2) strengthen the relations between predictors and help train predictors for domains with insufficient data. To incorporate the consistency loss into our training process, we add it to the predictive loss in equation (2) and obtain the final loss as \( L = L_{\text{pred}} + \lambda L_{\text{rel}} \), where \( \lambda \) balances these two terms.

**Test Stage.** During the testing phase, D³G constructs test-domain-specific models based on the same assumption that similar domains have similar predictive functions. Concretely, we weight all training-domain-specific functions and perform inference for each test domain \( t \) by weighting the predictions from the corresponding prediction heads. Specifically, for each test datapoint \( x \) drawn from the test distribution \( P_t \), D³G makes a prediction as follows: \[ \hat{y} = \frac{\sum_{d=1}^{N_{tr}} a_{dt} f^{(d)}(x)}{\sum_{k=1}^{N_{tr}} a_{kt}}, \] where \( a_{dt} \) represents the strength of the relation between the test domain \( t \) and the training domain \( d \). According to equation (4), for each test domain, training domains with stronger relations play a more important role in prediction (see the sketch below).
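A minimal PyTorch sketch of this training objective and the relation-weighted inference follows. The simple MLP feature extractor, a regression-style criterion such as `nn.MSELoss`, pre-computed relation matrices, and all class and function names are our illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class D3GSketch(nn.Module):
    # Illustrative multi-headed predictor: a shared feature extractor e(.)
    # and one head h^(d) per training domain (Sec. 3.1).
    def __init__(self, in_dim, hidden_dim, out_dim, num_train_domains):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, out_dim) for _ in range(num_train_domains)])

    def forward(self, x):
        z = self.encoder(x)
        return torch.stack([head(z) for head in self.heads], dim=1)  # [B, N_tr, out]

def d3g_loss(preds, y, d, A_train, criterion, lam=1.0):
    # L = L_pred + lambda * L_rel (Eqs. (2)-(3)); preds: [B, N_tr, out],
    # d: [B] training-domain ids, A_train: [N_tr, N_tr] relation strengths.
    batch = torch.arange(preds.size(0))
    l_pred = criterion(preds[batch, d], y)
    w = A_train[d].clone()               # relations of each example's domain
    w[batch, d] = 0.0                    # exclude the example's own head
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)
    l_rel = criterion((w.unsqueeze(-1) * preds).sum(dim=1), y)
    return l_pred + lam * l_rel

def d3g_predict(preds, a_t):
    # Test stage (Eq. (4)): reweight the training heads by the relations
    # a_{dt} between each training domain and the test domain t.
    w = a_t / a_t.sum().clamp_min(1e-8)
    return (w.view(1, -1, 1) * preds).sum(dim=1)
```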
This allows D³G to provide more accurate predictions by leveraging the knowledge from related domains.

### 3.2 Extracting and Refining Domain Relations

We then discuss how to obtain the pairwise similarity matrix \( A = \{a_{ij}\}_{i,j=1}^D \) between different domains. Here, domain relations are derived from domain meta-data. For example, in the drug-target binding affinity prediction task, where each protein is treated as a domain, we can use a protein-protein interaction network to model the relations between different proteins. Another scenario is land use prediction from satellite images (Koh et al., 2021b): if each country is treated as a domain, we can use geographical proximity to model the relations among countries. We denote by \( a_{ij}^g \) the fixed relation between domains \( i \) and \( j \) that is directly collected from domain meta-data; it is given either by a relation graph contained in the meta-data or by the pairwise similarity computed from each domain's meta-data. The learned relation introduced below is denoted as \( a_{ij}^l \).

One potential issue with directly collecting domain relations from fixed domain meta-data is that these fixed relations may not fully reflect accurate application-specific domain relations. For example, geographical proximity can be used in any application with spatial domain shifts, but it is hard to pre-define how strongly two nearby regions are related for a particular application. To address this issue and refine the fixed relations, we propose learning the domain relations from domain meta-data using a similarity metric function. Specifically, given domain meta-data \( m_i \) and \( m_j \) of domains \( i \) and \( j \), we use a two-layer neural network \( g \) to learn the corresponding domain representations \( g(m_i) \) and \( g(m_j) \). Following Chen et al. (2020), we compute the similarity between domains \( i \) and \( j \) with a multi-headed similarity layer, which is formulated as follows:

\[
a_{ij}^l = \frac{1}{R} \sum_{r=1}^{R} \cos(w_r \odot g(m_i), w_r \odot g(m_j)),
\]

where \( \odot \) denotes the Hadamard product and \( R \) is the number of heads. The collection of learnable weight vectors \( \{w_r\}_{r=1}^R \) has the same dimension as the domain representation \( g(m_i) \) and is used to highlight different dimensions of the vectors.

We use a weighted sum to combine the fixed and learned relations. Specifically, the relation between domains \( i \) and \( j \) is defined as follows:

\[
a_{ij} = \beta a_{ij}^g + (1 - \beta) a_{ij}^l,
\]

where \( 0 \leq \beta \leq 1 \) is a hyperparameter that controls the importance of both kinds of relations. By tuning \( \beta \), we can balance the contribution of the fixed and learned relations. The final domain relations are used in the consistency regularization and in the test stage. To summarize, the pseudocode of the training and test stages of D³G is detailed in Alg. 1.

### 4 Theoretical Analysis

In this section, we theoretically explore the underlying reasons why utilizing domain relations derived from domain meta-data can enhance the generalization capability to new domains.
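Before turning to the analysis, the following minimal PyTorch sketch illustrates the relation module of Sect. 3.2: a two-layer encoder \( g \) over domain meta-data, the multi-headed cosine-similarity layer producing \( a_{ij}^l \), and the convex combination \( a_{ij} = \beta a_{ij}^g + (1 - \beta) a_{ij}^l \). The class and variable names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationModule(nn.Module):
    """Learn pairwise domain relations from meta-data and blend them with fixed relations."""
    def __init__(self, meta_dim, rep_dim, n_heads, beta):
        super().__init__()
        # Two-layer encoder g(.) mapping meta-data m_i to a domain representation g(m_i).
        self.encoder = nn.Sequential(
            nn.Linear(meta_dim, rep_dim), nn.ReLU(), nn.Linear(rep_dim, rep_dim)
        )
        # One learnable weight vector w_r per similarity head, same dimension as g(m_i).
        self.head_weights = nn.Parameter(torch.randn(n_heads, rep_dim))
        self.beta = beta

    def forward(self, meta, A_fixed):
        # meta: (D, meta_dim) domain meta-data; A_fixed: (D, D) fixed relations a^g.
        g = self.encoder(meta)                                      # (D, rep_dim)
        h = self.head_weights.unsqueeze(1) * g.unsqueeze(0)         # (R, D, rep_dim): w_r ⊙ g(m_i)
        h = F.normalize(h, dim=-1)
        # Per-head cosine similarity, averaged over heads: the learned relations a^l.
        A_learned = torch.einsum('rid,rjd->rij', h, h).mean(dim=0)  # (D, D)
        # Convex combination of fixed and learned relations.
        return self.beta * A_fixed + (1.0 - self.beta) * A_learned
```

The returned matrix plays the role of \( A \) in the consistency regularization and in the test-time reweighting.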
Algorithm 1 Training and Test Procedure of D³G

Require: Training and test data, relation combining coefficient $\beta$, loss balancing coefficient $\lambda$, meta-data $\{m_d\}_{d=1}^D$ of all domains, learning rate $\gamma$
1: /* Training stage */
2: Initialize all learnable parameters
3: Extract the fixed relations $\{a^g_{ij}\}_{i,j=1}^{N^{tr}}$
4: while not converged do
5: Compute the learned relations $\{a_{ij}^l\}_{i,j=1}^{N^{tr}}$ and obtain the final domain relations by equation 6
6: for each example $(x, y, d)$ do
7: Calculate the supervised loss $L_{pred}$ by equation 2
8: Compute the consistency loss $L_{rel}$ by equation 3 using the domain relations
9: Update the learnable parameters with learning rate $\gamma$
10: /* Test stage */
11: for each test domain $t$ do
12: Obtain the relations between the test domain and the training domains $\{a_{dt}\}_{d=1}^{N^{tr}}$
13: for each example $(x, y, t)$ do
14: Compute the prediction $\hat{y}$ by equation 4

In our theoretical analysis, for an input datapoint $(x, y)$ from domain $d$, we rearrange the predictive function presented in equation (2) as $y = f^{(d)}(x) + \epsilon := h^{(d)}(e(x)) + \epsilon$, where $h^{(d)}(\cdot)$ and $e(\cdot)$ represent the head of domain $d$ and the feature extractor, respectively. $\epsilon$ is a noise term which is assumed to be sub-Gaussian with a mean of 0 and a variance of $\sigma^2$. During the testing process, we adopt the assumption that the outcome prediction function $f^{(t)}(x)$ for the test domain $t$ can be estimated using the following equation:

$$\hat{f}^{(t)}(x) = \hat{h}^{(t)}(e(x)), \text{ where } \hat{h}^{(t)} = \frac{\sum_{i=1}^{N^{tr}} a_{it} \hat{h}^{(i)}}{\sum_{k=1}^{N^{tr}} a_{kt}},$$

Here, $\hat{h}^{(i)}$ represents the learned head for the training domain $i$. In the case where the denominator in equation 7 is equal to 0, we define $\hat{h}^{(t)} = 0$.

To facilitate our theoretical analysis, we make the following assumptions: (1) For each domain $d$, the domain representation $Z^{(d)}$ derived from the domain meta-data $m_d$ (i.e., $Z^{(d)} = g(m_d)$) is assumed to be uniformly distributed on $[0, 1]^r$. Furthermore, it is assumed that the domain relations accurately capture the similarity between domains. Specifically, there exists a universal constant $G$ such that for all $i, j \in \mathcal{D}$, we have $\|\hat{h}^{(i)} - \hat{h}^{(j)}\|_\infty \leq G \cdot \|Z^{(i)} - Z^{(j)}\|$. (2) The relation $a_{it}$ between domains $i$ and $t$ is determined by the distance between their respective domain representations and a bandwidth $B$, defined as $a_{it} = 1\{\|Z^{(i)} - Z^{(t)}\| < B\}$. (3) For each training domain $d$, $\hat{h}^{(d)}$ is well-learned such that $\mathbb{E}[(\hat{h}^{(d)}(e(x)) - h^{(d)}(e(x)))^2] = O(\frac{C(\mathcal{H})}{n_d})$, where $C(\mathcal{H})$ is the Rademacher complexity of the function class $\mathcal{H}$. Based on these assumptions, we then have the following theorem.

Theorem 4.1. Suppose the number of examples satisfies $n_d \gtrsim n$ for all training domains $d \in \mathcal{D}^{tr}$, where $n$ is defined as the smallest number of examples across all domains.
If the loss function $\ell$ is Lipschitz with respect to the first argument, then for the test domain $t$, the excess risk satisfies

$$\mathbb{E}_{(x,y) \sim P_t}[\ell(\hat{f}^{(t)}(x), y)] - \mathbb{E}_{(x,y) \sim P_t}[\ell(f^{(t)}(x), y)] \lesssim B + \sqrt{\frac{C(\mathcal{H})/n}{N^{tr} B^r}}.$$

The theorem above implies that, by using domain relations to bridge the gap between training and test domains, the excess risk shrinks as the number of training domains grows. The detailed proofs are in Appendix A.1.

Building upon the results derived in Theorem 4.1, we now present a proposition that highlights the importance of obtaining a good relation matrix $A$ in enhancing out-of-domain generalization. Specifically, we compare our method with the traditional approach where all training domains are treated equally, and the similarity matrix is defined as $\tilde{A} = \{\tilde{a}_{ij}\}_{i,j=1}^D$, with each $\tilde{a}_{ij} = 1$. We compare the well-defined similarity matrix $A$ and the ill-defined $\tilde{A}$ in the following proposition:

Proposition 4.2. Under the same conditions as Theorem 4.1, suppose all $\tilde{a}_{ij} = 1$ and consider the function class $\mathcal{H} = \{h : \|h^{(i)} - h^{(j)}\|_\infty \leq G \cdot \|Z^{(i)} - Z^{(j)}\| \text{ for } i, j \in \mathcal{D}\}$. Defining the excess risk with similarity matrix $A$ as $R_h(\hat{f}^{(t)}, A) = \mathbb{E}_{(x,y) \sim P_t}[\ell(\hat{f}^{(t)}(x), y; A)] - \mathbb{E}_{(x,y) \sim P_t}[\ell(f^{(t)}(x), y; A)]$, we have

$$\inf_{\hat{f}^{(t)} \in \mathcal{H}} \sup_{h \in \mathcal{H}} R_h(\hat{f}^{(t)}, A) < \inf_{\hat{f}^{(t)} \in \mathcal{H}} \sup_{h \in \mathcal{H}} R_h(\hat{f}^{(t)}, \tilde{A}).$$

The proposition presented above indicates that by leveraging accurate domain relations, we can achieve superior generalization performance compared to the approach of treating all training domains equally. The detailed proof of this proposition can be found in Appendix A.2.

5 EXPERIMENTS

In this section, we conduct a series of experiments to evaluate the effectiveness of D³G. Here, we compare D³G with baselines from several categories of learning strategies, including (i) ERM [Vapnik, 1999]; (ii) distributionally robust optimization: GroupDRO [Sagawa et al., 2020]; (iii) invariant learning: IRM [Arjovsky et al., 2019], IB-IRM [Ahuja et al., 2021b], IB-ERM [Ahuja et al., 2021b], V-REx [Krueger et al., 2021a], DANN [Ganin et al., 2016b], CORAL [Sun and Saenko, 2016], MMD [Li et al., 2018a], RSC [Huang et al., 2020], CAD [Ruan et al., 2022], SelfReg [Kim et al., 2021], Mixup [Xu et al., 2020], LISA [Yao et al., 2022b], MAT [Wang et al., 2022b]; (iv) domain-specific learning: AdaGraph [Mancini et al., 2019], RaMoE [Bui et al., 2021], mDSDI [Bui et al., 2021], AFFAR [Qin et al., 2022], GRDA [Xu et al., 2022], DRM [Zhang et al., 2022b], LLE [Li et al., 2023], DDN [Zhang et al., 2023], TRO [Qiao and Peng, 2023]. Methods in categories (i), (ii), and (iii) learn a universal model for all domains. Additionally, for a fair comparison, we incorporate domain meta-data as features for all baselines during the training and test stages. All hyperparameters are selected via cross-validation. Detailed setups and baseline descriptions are provided in Appendix D.

5.1 ILLUSTRATIVE TOY TASK

Dataset Descriptions. Following Xu et al. (2022), we use the DG-15 dataset, a synthetic binary classification dataset with 15 domains.
In each domain \(d\), a two-dimensional key point \(x_d = (x_{d,1}, x_{d,2})\) is randomly selected in the two-dimensional space, and the domain meta-data is represented by the angle of the point (i.e., \(\arctan\left(\frac{x_{d,2}}{x_{d,1}}\right)\)). 50 positive and 50 negative datapoints are generated from two Gaussian distributions \(\mathcal{N}(x_d, I)\) and \(\mathcal{N}(-x_d, I)\) respectively. In DG-15, we construct the fixed relations between domain \(i\) and \(j\) as the angle difference between key points \(x_i\) and \(x_j\), i.e., \(d_{ij}^q = \arctan\left(\frac{x_{j,2}}{x_{j,1}}\right) - \arctan\left(\frac{x_{i,2}}{x_{i,1}}\right)\). The number of training, validation, and test domains are all set as 5. We visualize the training and test data in Figure 2a and 2b. Results and Analysis. The performance of D³G on the DG-15 dataset is in Figure 2. The results highlight several important findings. Firstly, it is observed that learning a single model fails to adequately capture the domain-specific information, resulting in suboptimal performance. To gain further insights into the performance improvements, Figures 2c and 2d depict the predictions from the strongest single model learning method (GroupDRO) and D³G, respectively. GroupDRO learns a nearly linear decision boundary that overfits the training domains and fails to generalize on the shifted test domains. In contrast, D³G effectively leverages domain meta-data, resulting in robust generalization to most test domains, with the exception of those without nearby training domains. Secondly, when compared to domain-specific learning approaches such as LLE, DDN, and TRO, D³G demonstrates superior performance. This improvement can be attributed to D³G’s enhanced ability to capture and utilize domain relations, enabling it to achieve stronger generalization capabilities. 5.2 REAL WORLD DOMAIN SHIFTS Datasets Descriptions. In this subsection, we briefly describe three datasets with natural distribution shifts and Appendix E provides additional details. • TPT-48. TPT-48 is a weather prediction dataset, aiming to forecast the next 6 months’ temperature based on the previous 6 months’ temperature. Each state is treated as a domain, and the domain meta-data is defined as the geographical location. Following Xu et al. (2022), we consider two dataset splits: I. N (24) → S (24): generalizing from the 24 states in the north to the 24 states in the south; II. E (24) → W (24): generalizing from the 24 states in the east to the 24 states in the west. • FMoW. The FMoW task is to predict the building or land use category based on satellite images. For each region, we use geographical information as domain meta-data. We first evaluate D³G on spatial domain shifts by proposing a subset of FMoW called FMoW-Asia, including 18 countries from Asia. Then, we study the problem on the full FMoW dataset from the WILDS benchmark [Koh et al., 2021b] (FMoW-WILDS), taking into account shift over time and regions. Figure 2: Results of domain shifts on toy task (DG-15). Figures (a) and (b) illustrate the training and test distributions, where datapoints in circles with the same color originate from the same domain. Figures (c) and (d) show the predicted distribution of the strongest single model method (GroupDRO) and D³G. Bottom Table reports averaged accuracy over all test domains (see full table with standard deviation in Appendix F). We bold the best results and underline the second best results. • ChEMBL-STRING. 
In drug discovery, we focus on molecule-protein binding affinity prediction. The ChEMBL-STRING [Liu et al., 2022] dataset provides both the binding affinity score and the corresponding domain relation. Following Liu et al. [2022], we treat proteins and pairwise relations as nodes and edges in the relation graph, respectively. The protein-protein relations and protein structures are treated as domain meta-data. Also following Liu et al. [2022], we evaluate our method using two subsets named PPI > 50 and PPI > 100.

Results. We present the results of D³G and other methods in Table 1. The evaluation metrics utilized in this study were selected based on the original papers that introduced these datasets (see results with more metrics in Appendix G). The findings indicate that most invariant learning approaches (e.g., IRM, CORAL) demonstrate inconsistent performance in comparison to the standard ERM. While these methods perform well on certain datasets, they underperform on others. Furthermore, even when employing domain-specific learning approaches such as LLE, DDN, and TRO, these methods exhibit inferior performance compared to learning a universal model. These outcomes suggest that these methods struggle to effectively learn accurate domain relations, even when provided with domain meta-data (more analysis in Appendix G.3). In contrast, D³G, which constructs domain-specific models, achieves the best performance by accurately capturing domain relations.

Table 1: Performance comparison between D³G and other baselines. Here, all baselines use domain meta-data as features. The discrepancy between our results and those reported on the leaderboard for FMoW-WILDS arises because we incorporate domain meta-data as features for all baselines. TPT-48 and FMoW-Asia involve region shifts, FMoW-WILDS involves a region-time shift, and ChEMBL-STRING involves a protein shift. We **bold** the best results and **underline** the second best results.

| Method | TPT-48 N (24) → S (24) (MSE ↓) | TPT-48 E (24) → W (24) (MSE ↓) | FMoW-Asia (Worst Acc. ↑) | FMoW-WILDS (Worst Acc. ↑) | ChEMBL-STRING PPI >50 (ROC-AUC ↑) | ChEMBL-STRING PPI >100 (ROC-AUC ↑) |
|---|---|---|---|---|---|---|
| ERM | 0.445 ± 0.029 | 0.328 ± 0.033 | 26.05 ± 3.84% | 34.87 ± 0.41% | 74.11 ± 0.35% | 71.91 ± 0.24% |
| GroupDRO | 0.413 ± 0.045 | 0.434 ± 0.082 | 26.24 ± 1.85% | 31.16 ± 2.12% | 73.98 ± 0.25% | 71.55 ± 0.59% |
| IRM | 0.429 ± 0.043 | 0.262 ± 0.034 | 25.02 ± 2.38% | 32.54 ± 1.92% | 52.71 ± 0.50% | 51.73 ± 1.54% |
| IB-IRM | 0.416 ± 0.009 | 0.272 ± 0.026 | 26.30 ± 1.51% | 34.94 ± 1.38% | 52.12 ± 0.91% | 52.33 ± 1.06% |
| IB-ERM | 0.458 ± 0.032 | 0.273 ± 0.030 | 26.78 ± 1.34% | 35.52 ± 0.79% | 74.69 ± 0.14% | 73.32 ± 0.21% |
| V-REx | 0.412 ± 0.042 | 0.343 ± 0.021 | 26.63 ± 0.93% | 37.64 ± 0.92% | 71.46 ± 1.47% | 69.37 ± 0.85% |
| DANN | 0.394 ± 0.019 | 0.515 ± 0.156 | 25.62 ± 1.59% | 33.78 ± 1.55% | 73.49 ± 0.45% | 72.22 ± 0.10% |
| CORAL | 0.401 ± 0.022 | 0.283 ± 0.048 | 25.87 ± 1.97% | 36.53 ± 0.15% | 75.42 ± 0.15% | 73.10 ± 0.14% |
| MMD | 0.409 ± 0.067 | 0.279 ± 0.026 | 25.06 ± 2.19% | 35.48 ± 1.81% | 75.11 ± 0.27% | 73.30 ± 0.50% |
| RSC | 0.421 ± 0.040 | 0.330 ± 0.068 | 25.73 ± 0.70% | 34.59 ± 0.42% | 74.83 ± 0.68% | 72.47 ± 0.38% |
| CAD | n/a | n/a | 26.13 ± 1.82% | 35.17 ± 1.73% | 75.17 ± 0.64% | 72.92 ± 0.39% |
| SelfReg | n/a | n/a | 24.81 ± 1.77% | 37.33 ± 0.87% | 75.42 ± 0.42% | 72.63 ± 0.71% |
| Mixup | 0.574 ± 0.030 | 0.357 ± 0.011 | 26.99 ± 1.27% | 35.67 ± 0.53% | 74.40 ± 0.54% | 71.31 ± 1.06% |
| LISA | 0.467 ± 0.032 | 0.345 ± 0.014 | 26.05 ± 2.09% | 34.59 ± 1.28% | 74.30 ± 0.59% | 71.45 ± 0.44% |
| MAT | 0.423 ± 0.027 | 0.291 ± 0.024 | 25.92 ± 2.83% | 35.07 ± 0.84% | 74.73 ± 0.30% | 72.07 ± 0.81% |
| AdaGraph | n/a | n/a | 25.91 ± 0.59% | 35.42 ± 0.55% | 74.02 ± 0.42% | 72.10 ± 0.06% |
| RaMoE | 0.372 ± 0.035 | 0.311 ± 0.060 | 26.65 ± 0.46% | 36.51 ± 0.71% | 74.99 ± 0.22% | 71.48 ± 0.49% |
| mDSDI | 0.445 ± 0.027 | 0.315 ± 0.089 | 25.54 ± 0.46% | 36.35 ± 0.45% | 75.09 ± 0.47% | 71.23 ± 0.69% |
| AFFAR | 0.403 ± 0.061 | 0.287 ± 0.040 | 25.87 ± 1.01% | 35.77 ± 0.70% | 74.55 ± 0.54% | 71.93 ± 0.33% |
| GRDA | 0.373 ± 0.040 | 0.355 ± 0.068 | 26.57 ± 0.70% | 34.41 ± 0.42% | 75.01 ± 0.68% | 73.57 ± 0.38% |
| DRM | 0.571 ± 0.038 | 0.557 ± 0.027 | 25.22 ± 2.33% | 36.39 ± 0.76% | 74.34 ± 0.48% | 72.41 ± 0.76% |
| LLE | 0.603 ± 0.041 | 0.467 ± 0.047 | 26.37 ± 1.19% | 35.83 ± 1.00% | 74.01 ± 0.63% | 71.68 ± 0.61% |
| DDN | 0.537 ± 0.024 | 0.601 ± 0.038 | 26.77 ± 1.72% | 35.13 ± 0.62% | 75.17 ± 0.61% | 72.71 ± 0.59% |
| TRO | 0.371 ± 0.054 | 0.281 ± 0.066 | 26.87 ± 1.26% | 37.48 ± 0.55% | 74.85 ± 0.27% | 72.49 ± 0.36% |
| D³G (ours) | **0.342 ± 0.019** | **0.236 ± 0.063** | **28.12 ± 0.28%** | **39.47 ± 0.57%** | **78.67 ± 0.16%** | **77.24 ± 0.30%** |

5.3 Ablation Study of D³G

In this section, we provide ablation studies on datasets with natural domain shifts to understand where the performance gains of D³G come from.

Does consistency regularization improve performance? We analyze the impact of domain-aware consistency regularization. In Figure 3, we present the results on FMoW and ChEMBL-STRING of introducing consistency regularization in the setting where only fixed relations are used. According to the results, we observe better performance when introducing consistency regularization, indicating its effectiveness in learning domain-specific models by strengthening the correlations between the domain-specific functions.

How do domain relations benefit performance? Our theoretical analysis shows that utilizing appropriate domain relations can enhance performance compared to simply averaging predictions from all domain-specific functions. To test this, we conducted an analysis on FMoW and ChEMBL-STRING, comparing the following variants of relations: (1) no relations used; (2) fixed relations only; (3) learned relations only; and (4) both fixed and learned relations. Our results, presented in Table 3 (full results: Appendix H.1), first indicate that using fixed relations outperforms averaging predictions, which confirms our theoretical findings and highlights the importance of using appropriate relations. However, only using learned relations resulted in a performance that is worse than using no relations at all, indicating that it is challenging to learn relations without any informative signals (e.g., fixed relations). Finally, combining learned and fixed relations results in the best performance, highlighting the importance of using learned relations to find more accurate relations for each problem.
### 5.4 Comparison of D³G with Domain-specific Fine-tuning

As stated in the introduction, a simple way to create a domain-specific model is to fine-tune a generic model trained by empirical risk minimization (ERM) on training data reweighted using domain relations. In this section, we compare our proposed model D³G with this approach (referred to as RW-FT) and present the results in Table 2. We also include the performance of the strongest baseline (CORAL) for comparison. The results show that RW-FT outperforms ERM and CORAL, further confirming the effectiveness of using domain distances to improve out-of-distribution generalization. Additionally, D³G performs better than RW-FT. This may be because using separate models for each training domain allows for more effective capture of domain-specific information.

### 5.5 Analysis of Relation Refinement

In this section, we conduct a qualitative analysis to determine whether the learned relations reflect application-specific information and improve the fixed relations extracted from domain meta-data. Specifically, we select three countries (Turkey, Syria, and Saudi Arabia) from FMoW-Asia and visualize the fixed relations and learned relations among them in Figure 4. Additionally, we visualize one multi-unit residential area from each of the three countries. We observe that although Turkey is geographically close to the other two countries in the Middle East (as shown by the fixed relations), its architectural style is influenced by Europe. Therefore, the learned relations refine the fixed relations and weaken the relations between Turkey and Saudi Arabia and Syria.

Table 2: Comparison between D³G and domain-specific fine-tuning. Full results: Appendix H.2.

| Model | FMoW-Asia (Worst Acc. ↑) | FMoW-WILDS (Worst Acc. ↑) | ChEMBL PPI>50 (AUC ↑) | ChEMBL PPI>100 (AUC ↑) |
|---|---|---|---|---|
| ERM | 26.05% | 34.87% | 74.11% | 71.91% |
| CORAL | 25.87% | 36.53% | 75.42% | 73.10% |
| RW-FT | 27.03% | 36.39% | 76.31% | 74.30% |
| D³G | 28.12% | 39.47% | 78.67% | 77.24% |

Table 3: Comparison of using different relations. The results on FMoW and ChEMBL-STRING are reported. When no relations are used, we take the average of predictions across all domains.

| Fixed relations | Learned relations | FMoW-Asia (Worst Acc. ↑) | FMoW-WILDS (Worst Acc. ↑) | ChEMBL PPI>50 (AUC ↑) | ChEMBL PPI>100 (AUC ↑) |
|---|---|---|---|---|---|
| ✗ | ✗ | 26.93% | 35.32% | 76.17% | 73.38% |
| ✓ | ✗ | 27.43% | 39.37% | 77.66% | 76.59% |
| ✗ | ✓ | 21.18% | 36.41% | 77.09% | 75.57% |
| ✓ | ✓ | 28.12% | 39.47% | 78.67% | 77.24% |

6 RELATED WORK

In this section, we discuss related work in the following categories: out-of-distribution generalization and ensemble learning, with test-time adaptation deferred to Appendix B.

Out-of-distribution Generalization. To improve out-of-distribution generalization, the first line of works aligns representations across domains to learn invariant representations by (1) minimizing the divergence of feature distributions (Long et al., 2015; Tzeng et al., 2014; Ganin et al., 2016a; Li et al., 2018b); (2) generating more domains and enhancing the consistency among representations (Shu et al., 2021; Wang et al., 2020; Xu et al., 2020; Yan et al., 2020; Yue et al., 2019; Zhou et al., 2020).
Another line of works aims to find a predictor that is invariant across domains by imposing an explicit regularizer (Arjovsky et al., 2019; Ahuja et al., 2021a; Guo et al., 2021; Khezeli et al., 2021; Koyama and Yamaguchi, 2020; Krueger et al., 2021b; Koyama and Yamaguchi, 2020) or selectively augmenting more examples (Yao et al., 2022b; Gao et al.). Recent studies explored the concept of learning domain-specific models (Li et al., 2022; Pagliardini et al., 2022; Zhang et al., 2023, 2022b). However, in comparison to these approaches, D³G stands out by leveraging domain metadata, incorporating consistency regularization and domain relation refinement techniques, to more effectively capture domain relations. Notably, even when compared to these prior domain-specific learning approaches that incorporate meta-data, D³G consistently achieves superior performance, underscoring its effectiveness in domain relation modeling. Ensemble methods. Our approach is closely related to ensemble methods, such as those that aggregate the predictions of multiple learners (Hansen and Salamon, 1990; Dietterich, 2000; Lakshminarayanan et al., 2017) or selectively combine the prediction from multiple experts (Jordan and Jacobs, 1994; Eigen et al., 2013; Shazeer et al., 2017; Dauphin et al., 2017). When distribution shift occurs, prior works have attempted to solve the underspecification problem by learning a diverse set of functions with the help of unlabeled data (Teney et al., 2021; Pagliardini et al., 2022; Lee et al., 2022). These methods aim to resolve the underspecification problem in the training data and disambiguate the model, thereby improving out-of-distribution robustness. Unlike prior works that rely on ensemble models to address the underspecification problem and improve out-of-distribution robustness, our proposed D³G takes a conceptually different approach by constructing domain-specific models. 7 CONCLUSION In summary, the paper presents a novel method called D³G for tackling the issue of domain shifts in real-world machine learning scenarios. The approach leverages the connections between different domains to enhance the model’s robustness and employs a domain-relationship aware weighting system for each test domain. We evaluate the effectiveness of D³G on various datasets and observe that it consistently surpasses current methods, resulting in substantial performance enhancements. REPRODUCIBILITY STATEMENT To make sure our work can be easily reproduced, we’ve outlined the training and test steps in Algorithm 1. Our theory in Section 4 is supported by proofs in the Appendix A. We’ve also provided more information about our experiments and settings in Appendix D, along with a description of the dataset in Appendix E. Code to reproduce our results will be made publicly available. ACKNOWLEDGEMENT We thank Linjun Zhang, Hao Wang, and members of the IRIS lab for the many insightful discussions and helpful feedback. This research was supported by Apple and Juniper Networks. CF is a CIFAR fellow. REFERENCES Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. In NeurIPS, 2021a. Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. 2021b. David Alvarez-Melis and Nicolo Fusi. Geometric dataset distances via optimal transport. 
Advances in Neural Information Processing Systems, 33:21428–21439, 2020. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. Alexander Bartler, Andre Bühler, Felix Wiewel, Mario Döbler, and Bin Yang. Mt3: Meta test-time training for self-supervised test-time adaption. In International Conference on Artificial Intelligence and Statistics, pages 3080–3090. PMLR, 2022. Manh-Ha Bui, Toan Tran, A. Tran, and D.Q. Phung. Exploiting domain-specific features to enhance domain generalization. In Neural Information Processing Systems, 2021. URL https://api.semanticscholar.org/CorpusID:239015990. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. Iterative deep graph learning for graph neural networks: Better and robust node embeddings. ArXiv, abs/2006.13009, 2020. Yongxing Dai, Xiaotong Li, Jun Liu, Zekun Tong, and Ling yu Duan. Generalizable person re-identification with relevance-aware mixture of experts. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16140–16149, 2021. URL https://api.semanticscholar.org/CorpusID:234777851. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pages 933–941. PMLR, 2017. Thomas G Dietterich. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pages 1–15. Springer, 2000. David Eigen, Marc’Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013. Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, H. Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-adversarial training of neural networks. In J. Mach. Learn. Res., 2016a. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016b.
3PWYAlAQxv
How is the permutation period k chosen in the relaxed version of the permutation-base training method? Table 2 lists the values used in the experiments but does not provide an intuition how the parameters were chosen.
Neural Networks Trained by Weight Permutation are Universal Approximators Anonymous authors Paper under double-blind review Abstract The universal approximation property is fundamental to the success of neural networks, and has traditionally been achieved by networks without any constraints on their parameters. However, recent experimental research proposed an innovative permutation-based training method, which can achieve desired classification performance without modifying the exact values of the weights. In this paper, we prove that the permutation training method can guide a ReLU network to approximate one-dimensional continuous functions. Our numerical results under more diverse scenarios also validate the effectiveness of the permutation training method in regression tasks. Moreover, the notable observations during weight permutation suggest that permutation training can provide a novel tool for describing network learning behavior. 1 Introduction The universal approximation property (UAP) of neural networks is a cornerstone in the theoretical guarantee of deep learning, proving that even the simplest two-layer feedforward networks can accurately approximate any continuous function (Cybenko [1989], Hornik et al. [1989], Leshno et al. [1993]). This fascinating ability allows neural networks to replace critical, challenging components in existing frameworks to enhance efficiency (Mnih et al. [2013], Goodfellow et al. [2014], Kaisst et al. [2019]). Despite the extensive study in various settings, existing research on the UAP rarely imposes restrictions on the network parameters. However, in certain application scenarios, constraints posed by some specific requirements are essential (Nugent [2005], Kosuge et al. [2021a]). As a constrained scenario, Qiu & Suda [2020] empirically showed that, without altering the exact value of network weights, only permuting the initialized weight vector can achieve comparable or better performance for image classification tasks. This unique property makes the permutation-based training method attractive for specific hardware applications, such as fixed-weight accelerators (Kosuge et al. [2021a]). It can also facilitate the implementation of physical neural networks (Nugent [2005], Feldmann et al. [2021]) (see App. A for details). Motivated by this impressive result, we are intrigued to investigate whether this permutation training method still possesses UAP. In this paper, we theoretically establish the UAP of a permutation-trained network with rectified linear unit (ReLU) activation functions for any one-dimensional continuous function. To address the non-trivial challenge posed by the permutation training setting, the key idea of our proof is a four-pair construction of the step function approximators, which helps us to approach the targeted continuous function with a piecewise constant function (Stein & Shakarchi [2009]). Additionally, a processing method is proposed to eliminate the impact of the remaining weights. Moreover, our numerical experiments not only validate the theoretical results but also demonstrate the widespread existence of the UAP of permutation training in diverse initializations. The patterns observed during permutation training also highlight its potential in describing learning behavior, relating to topics like the pruning technique (Frankle & Carbin [2019]) and continual learning (Maltoni & Lomonaco [2019], Zeng et al. [2019]). 
We summarize the main findings of this paper below: • We prove the UAP of permutation-trained networks with equidistant initialization and pairwise random initialization to one-dimensional continuous functions. • We conduct numerical experiments of regression problems under various generalized settings, identifying the common occurrence of the UAP of permutation training. • By observing the permutation patterns, we find that permutation training could potentially serve as a new approach to describe the detailed network learning behaviors. Related works. The UAP has been extensively studied in various settings, leading to many efficient applications. It is well known that fully connected networks are universal approximators for continuous functions (Cybenko [1989], Hornik et al. [1989], Leshno et al. [1993]). Additionally, the UAP of continuous functional and operator are presented by Chen & Chen [1995], giving rise to the operator learning formalisms such as DeepONet (Lu et al. [2021]). However, the traditional UAP is suited to wide networks where the weights are freely adjusted. Our configuration is focused on a specific approach that only allows permuting weights. Permutation is crucial in deep learning and closely relates to permutation equivariant or invariant networks (Cohen & Welling [2016]), designed to learn from symmetrical data (Zaheer et al. [2017], Lee et al. [2019]). It is also evident in graph-structured data which inherently exhibit permutation invariance (Maron et al. [2019], Satorras et al. [2021]). However, these works mainly concern issues with intrinsic symmetry, while permutation training is not limited to these scenarios. As for the weight permutation attempts, Qiu & Suda [2020] empirically proposed the first (to our knowledge) weight-permuted training method, which exhibits comparable classification performance and has been practically applied as a fixed-weight accelerator (Kosuge et al. [2021a,b]). A further discussion about this method’s advantages in hardware implementation is given in App. A. Our work provides theoretical guarantees of this method and considers some regression tasks numerically. Additionally, Scabini et al. [2022] improved the initialization by rewiring neurons from the perspective of computer networks, but the training methods are unchanged. Permutation training is also closely related to the permutation symmetry and linear mode connectivity (LMC) (Frankle et al. [2020], Entezari et al. [2021]). The LMC suggests that after a proper permutation, most SGD solutions under different initialization will fall in the same basin in the loss landscape. Similarly, our permutation training also seeks a permutation to effectively improve performance. Therefore, the search algorithm utilized in LMC (Jordan et al. [2023], Ainsworth et al. [2023]) can serve as a reference for the permutation training algorithm, and vice versa. Moreover, it would be interesting to explore the LMC between different permutation training solutions. Outline. We state the main result in Sect. 2, which includes ideas to derive the main result. In Sect. 3, we provide a detailed construction of the proof. The numerical results of permutation training are presented in Sect. 4, along with the observation of permutation behavior during the training process. Finally, the conclusion is provided in Sect. 5. All formal proof of the theorems is in the Appendix. 
2 NOTATIONS AND MAIN RESULTS

2.1 Neural network architecture

We start with a two-layer feed-forward ReLU network with an even number $N$ of hidden neurons (i.e., $N = 2n$). It has the form of a linear combination of ReLU basis functions (denoted as $\text{ReLU}(z) = \max\{z, 0\}$), i.e., $f(x) = \sum_{i=1}^{N} a_i \text{ReLU}(w_i \cdot x + b_i) + c$. Particularly, we focus on approximating one-dimensional functions, so all weights are scalars $(w_i, b_i, a_i, c \in \mathbb{R})$. Since the ReLU activation is positively homogeneous (i.e., $\text{ReLU}(\gamma x) = \gamma \text{ReLU}(x)$ for all $\gamma > 0$), we consider a simplified homogeneous case with $w_i = \pm 1$, and utilize $n$ to divide the basis functions into two parts as

$$\phi_i^\pm(x) = \text{ReLU}(\pm (x - b_i)), \quad i = 1, 2, ..., n,$$

where the biases $\{b_i\}_{i=1}^n$ determine the locations of the basis functions. Then we introduce a one-dimensional linear layer. It will be shown later that while this layer is not essential for achieving UAP, it does simplify the proof and offer practical value. The network's output function $f^{\text{NN}}$ is

$$f^{\text{NN}}(x) = \alpha + \gamma \sum_{i=1}^{n} [p_i \phi_i^+(x) + q_i \phi_i^-(x)],$$

where $\{p_i, q_i\}_{i=1}^n$ are the coefficients of the basis functions and $\alpha, \gamma$ are scaling factors. This form corresponds to a three-layer network, where $\{p_i, q_i\}_{i=1}^n$ and $\{\alpha, \gamma\}$ are the parameters in the second hidden layer and the output layer, respectively.

2.2 Weight configuration and main theorems

Without loss of generality, we consider the target continuous function \( f^* \in C([0,1]) \). During the permutation training process, we keep the values of the second hidden layer's weight vector \( \theta^{(n)} = (p_i, q_i)_{i=1}^n \) fixed and only update the order relationship of its components, leading to the following configuration: the weight vector \( \theta^{(n)} \) is permuted from a predetermined vector \( W^{(n)} \in \mathbb{R}^{2n} \). We first focus on a simple scenario with equidistantly distributed locations \( B^{(n)} \) and pairwise coefficients \( W^{(n)} \). The UAP of a permutation-trained network to continuous functions can be stated as follows:

**Theorem 1** (UAP with a linear layer). For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \), and \( \alpha, \gamma \in \mathbb{R} \) for \( f^{\text{NN}} \) in Eq. (2) with equidistantly distributed \( B^{(n)} = \left(0, \frac{1}{n-1}, \ldots, 1\right) =: (b_i)_{i=1}^n \) and corresponding \( W^{(n)} = (\pm b_i)_{i=1}^n \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}) \), such that \( |f^{\text{NN}}(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \).

The intuition behind this result comes from the rich expressive possibilities of permutation training, since there are \((2n)!\) different permutations of \( W^{(n)} \). Next, we strengthen the result in Theorem 1 to a purely permuted situation, showing that the UAP can be achieved without changing \( \alpha, \gamma \):

**Theorem 2** (UAP without the linear layer). Let \( \alpha = 0, \gamma = 1 \). For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \) for \( f^{\text{NN}} \) in Eq.
(2) with equidistantly distributed \( B^{(n)} = (b_i)_{i=1}^n \) and \( W^{(n)} = (\pm b_i)_{i=1}^n \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}) \) such that \( |f^{\text{NN}}(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). Although Theorem 1 can be viewed as a corollary of Theorem 2, the proof process will reveal the practical usefulness of learnable \( \alpha, \gamma \) in reducing the required network width to achieve UAP. Moreover, the result can be generalized to the scenario with random initialization, which is stated as **Theorem 3** (UAP for randomly initialized parameters). Given a probability threshold \( \delta \in (0,1) \), for any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there exists a large even number \( n \in \mathbb{Z}^+ \), and \( \alpha, \gamma \in \mathbb{R} \) for \( f^{\text{NN}}_r \) in Eq. (2) with randomly initialized \( B^{(n)}_r \sim \mathcal{U}[0,1]^n \) and pairwisely randomly initialized \( W^{(n)}_r = (\pm p_i)_{i=1}^n, p_i \sim \mathcal{U}[0,1] \), along with a permuted weight vector \( \theta^{(n)} = \tau(W^{(n)}_r) \), such that with probability \( 1 - \delta \), \( |f^{\text{NN}}_r(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). 2.3 Proof ideas To identify the UAP of our network (2) in \( C([0,1]) \), we employ a piecewise constant function, which is a widely-used continuous function approximator (Stein & Shakarchi, 2009), and can be expressed as a summation of several step functions. Next, we demonstrate that our networks can approximate each step function. In this spirit, our constructive proof includes three steps: 1. Approach the target function \( f^* \) by a piecewise constant function \( g \); 2. Approximate each step function of \( g \) by a subnetwork of \( f^{\text{NN}} \) with permuted coefficients; 3. Annihilate the unused basis functions and coefficients of \( f^{\text{NN}} \). Thanks to the Stone-Weierstrass theorem in function approximation theory (Stone, 1948), step 1 can be achieved by dividing the range of \( f^* \) with a uniform height to construct each step functions \( f_s \) (see Fig. 1(a)). The statement is outlined below (refer to App. B for detailed definition and proof), **Lemma 1.** For any function \( f^*(x) \in C([0,1]) \) and any small number \( \varepsilon > 0 \), there is a piecewise constant function \( g(x) \) with a uniform height \( \Delta h \leq \varepsilon \), such that \( |g(x) - f^*(x)| \leq \varepsilon \) for all \( x \in [0,1] \). The execution of step 2 is inspired by the divide-and-conquer algorithm in computer science (Hopcroft et al., 1983) and the multi-grid method in numerical analysis (Hackbusch, 2013). Suppose that the piecewise constant function \( g \) in Lemma 1 is a summation of \( J \) step functions \( \{f_{s_j}\}_{j=1}^J \), we partition the basis functions \( B^{(n)} \) also into \( J \) subgroups as \( B^{(n)} = \bigcup_{j=1}^J B_j \). Each subgroup \( B_j \) contains \( b_i \) distributed over the entire domain, instead of localized \( b_i \) (see Fig. 1(b)). This allows each subgroup to approach \( f_s \) at arbitrary locations using the same construction. --- 1 Fig. 7 in App. M intuitively shows various kinds of \( f^{\text{NN}}(x) \) under different permutations. Figure 1: Main idea of the construction. (a) Approximate the continuous function $f^*$ by a piecewise constant function $g$ which is further approximated by permuted networks $\tilde{f}^{\text{NN}}$. (b) Partition of basis functions. 
(c) The step function approximator $\tilde{f}_s^{\text{NN}}$ constructed by four-pair of basis functions located at $b_1, b_2, b_3, b_4$. (d) Summing pseudo-copies to adjust the heights of resulting function $\tilde{f}_s^{\text{NN}}$. Then, for every subgroup $B_j$, we construct a step function approximator $\tilde{f}_{s_j}^{\text{NN}}$ to approximate $f_{s_j}$, then sum them up to approach $g$. A core technique of this construction is utilizing four pairs of basis functions $\{\pm b_i\}_{i=1}^{4}$ (shown in Fig. 1(c)), along with a one-to-one correspondence between coefficients and biases (i.e., $\{p_i, q_i\}_{i=1}^{4} = \{\pm b_i\}_{i=1}^{4}$) to meet the permutation training setting, where each coefficient is used only once. This construction can also prevent conflict between different $B_j$. It is important to note that step 3 is necessary to achieve the desired construction. A crucial challenge of permutation training is that we must assign every parameter, rather than just pick up the wanted parameters and discard the rest. Therefore, it is essential to eliminate the remaining network parameters after step 2 to prevent the potential accumulation of errors. We solve this problem by a processing method that reorganizes them into a linear function with controllable slope and intercept. To further enhance the conclusion of Theorem 1 to Theorem 2, we introduce a technique called pseudo-copy, which can achieve UAP without the linear layer. By refining the parameters distribution, several pseudo-copies $\tilde{f}_s^{\text{NN}}$ of the original approximator $\tilde{f}_s^{\text{NN}}$ can be produced with a controllable error (see Fig. 1(d)). The final height can then be adjusted by stacking these copies together, making the scale parameters $\alpha, \gamma$ in Theorem 1 removable. Extending the UAP to the random initializations is justified by that as the width increases, the parameters randomly sampled from uniform distributions become denser, thus approaching the equidistant case. Therefore, a sufficiently wide network has a high probability of finding a subnetwork that is close enough to the network with UAP in the equidistant case. Then this subnetwork can also achieve UAP due to its continuity. The remaining part of the network can be eliminated by step 3. 3 UAP OF PERMUTATION-TRAINED NETWORKS This section provides a detailed construction of the approximator with weight-permuted networks in the equidistant case, along with an estimation of the convergent rate of approximation error. The extension to the scenario with random initialization is also thoroughly discussed. 3.1 THE FOUR-PAIR CONSTRUCTION OF STEP FUNCTION APPROXIMATORS We start with the equidistant case, and consider four pairs of basis functions $\{\phi_i^\pm\}_{i=1}^{4}$ in Eq. (1) and coefficients $\{p_i, q_i\}_{i=1}^{4} = \{\pm b_i\}_{i=1}^{4}$, where $b_1 \leq b_2 \leq b_3 \leq b_4$ along with a symmetric distance $d = b_2 - b_1 = b_4 - b_3$. The step function approximator $\tilde{f}_s^{\text{NN}}$ has a piecewise linear form as $$\tilde{f}_s^{\text{NN}}(x) = \sum_{i=1}^{4} p_i \phi_i^+(x) + \sum_{i=1}^{4} q_i \phi_i^-(x).$$ \hspace{1cm} (3) To ensure a local error of the approximator, we appeal \( f_{s}^{\text{NN}} \) to be \( x \)-independent outside the interval \([b_1, b_4]\). 
As a result, the coefficients \( p_i, q_i \) must satisfy \( \sum_{i=1}^{4} p_i = \sum_{i=1}^{4} q_i = 0 \), which implies the following correspondence between \( \{p_i, q_i\}_{i=1}^{4} \) and \( \{\pm b_i\}_{i=1}^{4} \):

\[
\begin{aligned}
p_1 &= -b_1, & p_2 &= +b_2, & p_3 &= +b_3, & p_4 &= -b_4, \\
q_1 &= +b_4, & q_2 &= -b_3, & q_3 &= -b_2, & q_4 &= +b_1,
\end{aligned}
\]

and the detailed expression of \( \tilde{f}_{s}^{\text{NN}} \) is given in Eq. (9) in App. C. Notice that \( \tilde{f}_{s}^{\text{NN}} \) is monotone and centrally symmetric about the point \( x = \frac{b_2 + b_3}{2} \), so the absolute values of the two constant pieces on \( x < b_1 \) and \( b_4 \leq x \) are the same. The height \( h \) of \( \tilde{f}_{s}^{\text{NN}} \) is then

\[
h = 2(b_1^2 - b_2^2 - b_3^2 + b_4^2) = 4(b_2 b_3 - b_1 b_4) = 4d(b_4 - b_2).
\]

Along with a shifting scale \( h/2 \), it can approach the step function \( f_s(x) = h \chi(x - s) \) with \( s \in [b_1, b_4] \), where \( \chi(z) = 1 \) when \( z > 0 \) and \( \chi(z) = 0 \) otherwise (see Fig. 1(c)). An example is plotted in Fig. 8 in App. M. It is obvious that the error \( \| (\tilde{f}_{s}^{\text{NN}} + h/2) - f_s \|_{L^\infty} \) has a trivial bound \( h \).

### 3.2 Annihilate the Unused Part of the Network

After constructing the step function approximators, the remaining parameters must be suitably arranged to eliminate their impact. Notice that the pair of basis functions \( \phi_i^\pm \) at each location \( b_i \) is either used together or not at all. Therefore, for each unused pair \( \phi_i^\pm \) and the corresponding coefficients \( \pm p_i \), we can form a linear function \( a_i \ell_i \), where \( \ell_i(x) := p_i \phi_i^+(x) - p_i \phi_i^-(x) = p_i x - p_i b_i \), along with a freely adjusted sign \( a_i = \pm 1 \). The goal then is to choose a proper sign vector \( a = \{a_i\}_{i=1}^{n} \) for the \( \ell_i \) so as to control \( \| S_\ell \|_{L^\infty} \) on \([0, 1]\), where \( S_\ell(x) := \sum_{i=1}^{n} a_i \ell_i(x) \) is the summed function. This can be achieved by bounding the slope \( \sum_{i=1}^{n} a_i p_i \) with respect to \( a \), which becomes a problem of organizing addition and subtraction operations within a given series so as to reduce the final result. The following lemma provides a solution with an upper bound related to the largest gap in the series.

**Lemma 2.** For an even number \( n \) and a sequence of real numbers \( \{c_i\}_{i=1}^{n} \) with \( c_i \in [0, 1], i = 1, 2, \cdots, n \), there exists a combination of signs \( \{a_i\}_{i=1}^{n} \) with \( a_i = \pm 1 \), such that \( 0 \leq \sum_{i=1}^{n} a_i c_i \leq \Delta c \), where \( \Delta c = \max_i |c_{i+1} - c_i| \) is the largest gap between the elements in \( \{c_i\}_{i=1}^{n} \).

We prove Lemma 2 by proposing a processing method (refer to App. D). As the network width increases, the distribution of \( p_i \) becomes denser, causing the largest gap \( \Delta p \to 0 \); thus the error introduced by the unused part can be arbitrarily small. Notice that the only assumption of this method is the pairwise initialization of coefficients like \( (\pm p_i)_{i=1}^{n} \), enabling the extension to random initializations. Besides, it also permits generalization to deeper networks by constructing an identity function and eliminating the remaining parts. Further details can be found in App. D.
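As a sanity check on the four-pair construction, the short NumPy sketch below assembles \( \tilde{f}_s^{\text{NN}} \) from Eq. (3) with the coefficient assignment above, and verifies that it is constant outside \([b_1, b_4]\) with jump \( 4d(b_4 - b_2) \) as in Eq. (5). The helper names and the particular choice of \( b_i \) are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def four_pair_step(x, b):
    """Step-function approximator built from four pairs of ReLU basis functions (Eq. (3)).

    b = (b1, b2, b3, b4) with b2 - b1 = b4 - b3 = d; coefficients follow the assignment above."""
    b1, b2, b3, b4 = b
    p = np.array([-b1, +b2, +b3, -b4])       # coefficients of phi_i^+
    q = np.array([+b4, -b3, -b2, +b1])       # coefficients of phi_i^-
    out = np.zeros_like(x)
    for bi, pi, qi in zip(b, p, q):
        out += pi * relu(x - bi) + qi * relu(bi - x)
    return out

# Example: an assumed symmetric configuration with d = 0.05 around s = 0.5.
b = (0.35, 0.40, 0.60, 0.65)
d = b[1] - b[0]
x = np.linspace(0.0, 1.0, 2001)
f = four_pair_step(x, b)

left = f[x < b[0]]           # constant piece to the left of b1
right = f[x >= b[3]]         # constant piece to the right of b4
print(np.allclose(left, left[0]), np.allclose(right, right[0]))   # flat outside [b1, b4]
print(right[0] - left[0], 4 * d * (b[3] - b[1]))                   # height matches Eq. (5)
```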
### 3.3 Approximate Piecewise Constant Functions Now we briefly discuss how to permute equidistant coefficients \( W^{(n)} \) in \( f^{\text{NN}}(x) = \sum_{j=1}^{J} f_{s_j}^{\text{NN}}(x) \) to approximate piecewise constant function \( g(x) = \sum_{j=1}^{J} a_j \Delta h \chi(x - s_j) \) in Lemma 1 with accuracy \( \varepsilon \), where \( a_j = \pm 1 \) and \( \Delta h < \varepsilon / 2 \). The detailed proof is provided in App. E. We choose \( n \) sufficiently large to ensure that every approximator \( a_j [f_{s_j}^{\text{NN}}(x) + \frac{h}{2}] \) can approximate \( f_{s_j}(x) = a_j \Delta h \chi(x - s_j) \) with error \( h \). Since the height \( h \) in Eq. (5) may not equal \( \Delta h \), a multiplying factor \( \gamma = \Delta h / h \) is needed. Similarly, the accumulated \( h/2 \) shifting in each \( f_{s_j}^{\text{NN}} \) requires another scaling parameter \( \alpha \). Then the whole approximation, along with Lemma 1, allow us to prove the Theorem 1 since \[ |f^{\text{NN}}(x) - g(x)| = \left| \alpha + \gamma \sum_{i=1}^{n} \left[ p_i \phi_i^+(x) + q_i \phi_i^-(x) \right] - g(x) \right| \leq \Delta h < \varepsilon / 2, \quad \forall x \in [0, 1]. \] Next, we achieve UAP without the scaling parameters \( \alpha, \gamma \). The shifting scale \( \alpha \) can become small enough by constructing a constant function with a similar height (see App. F). To handle the mismatch between \( h \) and \( \Delta h \), we introduce the pseudo-copy technique, which stacks \( M \) copies of \( f_{s_j}^{\text{NN}} \) to reach the height \( \Delta h = Mh \) (see Fig. 1(d)). However, the copies’ locations cannot be identical since the biases \( B^{(n)} \) are uniquely assigned. Therefore, we refine the biases \( M \)-times and partition it into \( M \) subgroups as \( B^{(Mn)} = \bigcup_{l=1}^{M} B_l \) like Fig. 1(b). The pseudo-copy \( f_{s_j}^{\text{NN}} \) is then organized on each $B_t$, respectively. Since the pseudo-copies are very close to the original one, the refined approximation error $\|f_{s_i}^{\text{NN}} - f_s\|_{L^\infty}$ can also be controlled (refer to App. G). Theorem 2 can be proved as below, which indicates that constructing pseudo-copies requires a much larger network. $$|f_{s_i}^{\text{NN}}(x) - g(x)| = \left| \sum_{i'=1}^{M_n} [p_{i'} \phi_{i'}^+(x) + q_{i'} \phi_{i'}^-(x)] - g(x) \right| \leq \Delta h < \varepsilon/2, \quad \forall x \in [0, 1]. \tag{7}$$ ### 3.4 Estimate the Approximation Rate Here we estimate the approximation rate roughly by the $L^2$ error $E_s$ of approximating single step function $f_s(x) = h \chi(x - s)$ by $f_{s_i}^{\text{NN}}(x)$. Start with our four-pair construction in Eq. (3), assume $s = (b_2 + b_3)/2$ and rewrite the relations $b_1 = s - k_2, b_2 = s - k_1, b_3 = s + k_1, b_4 = s + k_2$, where $0 < k_1 \leq k_2$, then the error of single approximator gives (see App. H for details and a similar estimation for pseudo-copies) $$e_s^2 = \left\| \left( f_{s_i}^{\text{NN}} + \frac{h}{2} \right) - f_s \right\|_{L^2}^2 = \frac{8}{3}(k_1 - k_2)^2(k_1^3 + 3k_1^2k_2 + 2k_1k_2^2 + k_2^3) \leq \frac{56}{3}d^2k_2^3. \tag{8}$$ In our step function approximator in Eq. (4), the $k_2$ can be chosen as $k_2 \sim O(d)$, which implies $e_s \sim O(d^{5/2})$. However, the height $h$ in Eq. (5) also gives $h \sim O(d^2)$. To approximate the step function $f_s$ with height $\Delta h \sim O(1)$, the number of stacked pseudo-copy must satisfy $M = \frac{\Delta h}{h} \sim O(d^{-2})$. Hence the final error is estimated as $E_s = M e_s \sim O(d^{1/2})$. 
Recall that $d = \frac{1}{2n-1}$; we then have $E_s \sim O(n^{-1/2})$, which means the approximation rate is roughly of order 1/2 with respect to the network width. We will verify this rate with the experimental results in Sect. 4.

### 3.5 Generalize to the Random Initializations

In extending the UAP to the common scenario involving random initializations, the basic proof ideas remain unchanged. However, the construction of step function approximators in Eq. (3) becomes invalid because the desired basis functions cannot be located exactly. Nevertheless, the randomly sampled basis functions become denser as the width increases, leading to a high probability of finding basis functions that closely match the required locations. Therefore, we can first apply the UAP in the equidistant case to obtain a network $f^{\text{NN}}$ in Eq. (2) that exhibits approximation power. Then, within a randomly initialized network of sufficient width, we find a subnetwork $f^{\text{sub}}$ that can be regarded as a randomly perturbed version of $f^{\text{NN}}$. If this perturbation is small enough, the subnetwork $f^{\text{sub}}$ also possesses approximation power. Notice that this argument holds even for totally random coefficients $W_r^{(n)} \sim U[0, 1]^{2n}$. However, eliminating the unused parameters by the process discussed in Sect. 3.2 requires a pairwise form such as $W_r^{(n)} = (\pm p_i)_{i=1}^{n}$. Therefore, we restrict our result to the case in Theorem 3. The detailed proof, along with an estimate of the probability introduced by randomness, is given in App. I.

4 Experiments

This section presents numerical evidence to support and validate the theoretical proof. An interesting observation of permutation behaviors also highlights the theoretical potential of this method.

4.1 The Algorithm Implementation of Permutation Training

In the implementation of permutation training, guidance is crucial for finding the ideal order relationship of the weights. The lookahead permutation (LaPerm) algorithm proposed by Qiu & Suda (2020) introduces a $k$-times Adam-based free update (Kingma & Ba, 2015), and the learned order relationship then serves as a reference for permuting. To ensure performance, the weights are permuted after every $k$ epochs. Apart from the fixed permutation period $k$ chosen by Qiu & Suda (2020), we also consider a relaxed algorithm with a gradually increasing $k$, so that sufficient information is learned before the next permutation. The impact of the value of $k$ on convergence behavior is evaluated and found to be negligible (see App. N). See App. J for a discussion of the original and relaxed LaPerm algorithms.

4.2 Experimental Setting of Function Approximation Tasks

Now we carry out experiments on some regression problems to justify our theoretical results. We consider a three-layer network as in Eq. (2), where the first hidden layer's parameters are fixed to form the ReLU basis functions \( \{\phi_i^\pm\}_{i=1}^n \) in Eq. (1), and the weights \( \theta^{(n)} \) of the second hidden layer are trained by permutation. Moreover, \( \alpha, \gamma \) in the output layer are freely trained scaling factors used to reduce the required network width. All the experiments below are repeated 10 times with different random seeds, and the error bars mark the range of the maximum and minimum values. Refer to App. I for the detailed experimental environment and setting of each case.
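To make the training loop of Sect. 4.1 concrete, below is a minimal PyTorch sketch of the permutation step: after every $k$ epochs of ordinary Adam updates, the freely trained weight vector is projected back onto the initial multiset of values by matching ranks, so only the order of the weights ever changes. This is a simplified sketch under stated assumptions (a single one-dimensional weight vector such as \( \theta^{(n)} \) and a fixed period $k$), not the LaPerm code of Qiu & Suda (2020).

```python
import torch

def permute_to_match(frozen_values, reference):
    """Rearrange the fixed multiset `frozen_values` so its ranking matches `reference`.

    The i-th smallest initial value is placed where the i-th smallest freely trained
    value currently sits, so only the order of the weights changes."""
    order_ref = torch.argsort(reference)            # positions sorted by the free weights
    permuted = torch.empty_like(frozen_values)
    permuted[order_ref] = torch.sort(frozen_values).values
    return permuted

def laperm_style_training(model, layer_param, data_loader, loss_fn, k=5, epochs=50, lr=1e-3):
    """Sketch of permutation training for one 1-D weight vector `layer_param` of `model`.

    `layer_param` is the tensor whose values stay fixed up to permutation; any other
    parameters are trained freely. Names and the fixed period `k` are assumptions."""
    frozen_values = layer_param.detach().clone()    # the multiset that is only permuted
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for x, y in data_loader:                    # one epoch of free (Adam) updates
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        if (epoch + 1) % k == 0:                    # every k epochs: project back by permutation
            with torch.no_grad():
                layer_param.copy_(permute_to_match(frozen_values, layer_param.detach()))
    return model
```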
4.3 Approximating the One-Dimensional Continuous Functions For one-dimensional cases, we utilize a 1-2n-1-1 network architecture with random initializations discussed in Theorem 3. The approximation targets are typical continuous functions \( y = -\sin(2\pi x) \) and 3-order Legendre polynomial \( y = \frac{1}{2}(5x^3 - 3x) \), where \( x \in [-1, 1] \). A more complicated case about a Fourier series with random coefficients, along with the results of the equidistant scenario, are presented in App. Q. The numerical result illustrated in Fig. 2 exhibits a clear convergence behavior upon increasing \( n \). Our relaxed LaPerm algorithm doesn’t show a significant advantage, potentially due to the preliminary attempt of exponentially increasing \( k \). This suggests a need for advanced relaxation schemes, such as a self-adjusted strategy (Qiao et al., 2011). Furthermore, the \( L^\infty \) error exhibits a 1/2 convergence rate with respect to \( n \). Although the theoretical estimation in Sect. 3 is based on \( L^2 \) norm, we indeed observe that it also holds for \( L^\infty \) error. ![Figure 2](image) Figure 2: Approximating one-dimensional continuous function (a): \( y = -\sin(2\pi x) \) and (b): \( y = \frac{1}{2}(5x^3 - 3x) \) with randomly initialized network, where \( x \in [-1, 1] \). The inset in each panel presents the target function as lines and an example of the approximation result as dots. 4.4 The Performance of Various Random Initializations Here we discuss the impact of initialization on performance, which is more crucial for permutation training due to the weights’ lack of positional flexibility. Here we utilize the case in Fig. 2(a) to consider 8 different random initialization methods. Fig. 3 shows that the UAP in permutation-trained networks is not limited in the setting considered by our theorems. The converged random cases followed the pairwise initialization outperform the equidistant scenario, demonstrating the well-known advantages of random initializations. However, some commonly used random initializations, such as Xavier’s uniform initialization \( U_X \) (Glorot & Bengio, 2010), and He’s uniform and normal initializations \( U_H \) and \( N_H \) (He et al., 2015), fail to show convergence behavior. These results emphasize the incompatibility between the existing initializations and the permutation training setting. Further insight can be found by comparing the results in pairs. We first focus on totally and pair-wisely randomly initializing \( W^{(n)} \) from uniform distribution \( U[-1, 1] \), which are labeled as case 1 and 2, respectively. Apart from the clear dominance of pairwise case 1, the total case 2 also shows a certain degree of approximation power. Next, for a randomly initialized \( B^{(n)} \), in case 3 we let \( W^{(n)} \) have a strict correspondence like the equidistant case, while in case 4 \( W^{(n)} \) is initialized separately. The almost equivalent results indicate that the correspondence between \( B^{(n)} \) and \( W^{(n)} \) in Eq. (3) may not be necessary in random cases. Moreover, we apply the standard \( U_H \) for \( W^{(n)} \) in case 5. Figure 3: The performance of randomly initialized parameters to approximate $y = -\sin(2\pi x)$, where $x \in [-1, 1]$. The pairwise random distribution of $W^{(n)} = (\pm p_i)_{i=1}^n; p_i \sim U[-1, 1]$ is noted as $W^{(n)} \sim U_{[0, 1]^n}$, and the same applies to $U_X^{[\pm]}[0, 1]^n$ and $N_H^{[\pm]}[0, 1]^n$. The error bars are omitted for conciseness. 
The inset panel presents the target function as lines and an example of the approximation result as dots. and also for $B^{(n)}$ in case 6. It shows that case 5 achieves the best accuracy for larger networks ($n > 320$), while case 6 exhibits unexpected deterioration, which may be attributed to the mismatch of the scale in $B^{(n)}$. Finally, the default choices $N_H$ and $U_X$ in cases 7 and 8 both yield surprisingly poor performance, underscoring the need for new initializations suitable to permutation training. 4.5 Observation of the Permutation-Active Patterns This section aims to explore the theoretical potential of permutation training in describing network learning behavior. Based on the significant correlation between permutation and learning behavior as evidenced by Qiu & Suda (2020) and our investigation, we hypothesize that the permutation-active components of the weight vector play a crucial role in the training process. Therefore, by identifying and tracing the permutation-active part of weights, a novel tool that provides insights into learning behavior can be achieved, which also facilitates visualization and statistical analysis. As a preliminary attempt, we illustrate the permutation behavior of the coefficients $\theta^{(n)}$ in Fig. 4. The components that participated in the permutation are visually highlighted in dark green. The behavior clearly demonstrated that the order relationship evolves synchronously with the learning process, agreeing with the observation in Qiu & Suda (2020). Figure 4: The permutation behavior in the first 400 permutation iteration in approximating $y = -\sin(2\pi x)$ by equidistantly initialized network with $n = 640$. (a) The distribution of the active components (denoted by dark green color). (b) The frequency distribution illustrates the variation in the total count of active components in each permutation. (c) The corresponding loss behavior. Specifically, the distribution of the active components shows significant patterns, which can be classified into four stages (marked in red dash lines in Fig. 4). The loss declines sharply in the initial stage, while only the components with medium value are permuted. Once loss reaches a plateau in the second stage, more components are involved in permutation, evidencing the role of permutation in propelling the training. As loss starts to decline again, the permutation frequency correspondingly diminishes. Interestingly, the slower loss decrease gives rise to a ribbon-like pattern, akin to the localized permutations reported by Qiu & Suda (2020). This is possibly due to slow updates failing to trigger a permutation. This observation may support the existence of inherent low-dimensional structures within the permutation training dynamics, potentially linked to mathematical depiction of permutation groups, such as cycle decomposition (Cameron, 1999) and Fourier bases for permutation (Huang et al., 2009). Finally, the permutation’s saturation stage aligns with the stationary state of loss convergence. We believe these inspiring phenomena deserve further exploration. 5 CONCLUSION AND DISCUSSION As a constrained training method, permutation training exhibits unique properties and practical potential (see App. A). To verify its efficacy, we prove the UAP of permutation-trained networks with equidistant initialization and pairwise random initialization for one-dimensional continuous functions. The key idea is a four-pair construction of step function approximators in Fig. 
1, along with a processing method to eliminate the impact of the remaining parameters. Our experimental results not only confirm the theoretical declarations (see Fig. 2), but also validate the approximation power for various random initializations in Fig. 3 establishing the prevalence of the UAP of permutation training. The discovery that certain commonly used initializations fail to achieve UAP also raises an intriguing question about the systematical characterization of initializations that satisfy UAP. The generalizability of our results holds significant importance. Extending to networks equipped with leaky-ReLU can be straightforward (refer to App. G for numerical evidence). Our approach also facilitates implementations within other architectures (see App. F for detailed discussion). However, extending our results to the high-dimensional scenario still faces some theoretical challenges, although some preliminary experimental attempts have been made for two-dimensional inputs (see App. K). One potential approach is similar to the discussion in Sect. 3.5 but here we can directly seek the subnetwork as a random perturbation from the network with conventional UAP in high dimensions. To achieve this, however, the processing method in Sect. 3.2 must be generalized from pairwise to total random initializations. We plan to address this problem in future work. Our observation in Sec. 4.5 suggests that permutation training is a novel tool to shed light on network learning behavior. It corresponds well with the training process and has systematical mathematical descriptions (Cameron, 1999; Huang et al., 2009). Specifically, the patterns observed in Fig. 4 can intuitively justify some weight categorization strategies, leading to potential benefits for consolidating the crucial weights for previous tasks (Maltoni & Lomonaco, 2019), or pruning to find the ideal subnetwork in the lottery ticket hypothesis (Frankle & Carbin, 2019). Additionally, the existing permutation training algorithm can be viewed as applying an order-preserving projection from the free training results to the initial weight value, sharing the same form as weight projection methods in continual learning (Zeng et al., 2019). This work is expected to facilitate the practical applications of permutation training. However, some issues still exist and deserve further investigation. Notably, existing initializations derived from the free training situation, such as He’s normal initialization, perform poorly with permutation training in Fig. 3 emphasizing the need for developing more compatible initializations. This could pave the way to effectively training higher-dimensional and deeper networks by weight permutation, thereby meeting the practical requirements. Further, the permutation training itself also has the potential to serve as an initialization protocol (Scabini et al., 2022). The existing attempts at algorithm implementations guide the permutation by Adam-based inner loops, thus incurring undesirable external computation costs. However, if the order relationships can be learned through other time-saving approaches, such as the learn-to-rank formalism (Cao et al., 2007), or permutation search algorithms in the study of LMC (Jordan et al., 2023; Ainsworth et al., 2023), the benefits of permutation training will be actualized in practice. Importantly, our proof is independent of algorithm implementations, which is expected to inspire and motivate the development of more advanced algorithms. 
Overall, we believe that the UAP of permutation-trained networks underscores the profound, yet undiscovered insights into how the weight encodes the learned information, highlighting the importance of further exploration in this field. REFERENCES Samuel Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git Re-Basin: Merging models modulo permutation symmetries. In The Eleventh International Conference on Learning Representations, 2023. Zhiqiang Cai, Jingshuang Chen, and Min Liu. Least-squares ReLU neural network (LSNN) method for linear advection-reaction equation. Journal of Computational Physics, 443:110514, 2021. Peter J Cameron. Permutation groups. Number 45. Cambridge University Press, 1999. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, pp. 129–136, 2007. Shiyi Chen and Gary D Doolen. Lattice boltzmann method for fluid flows. Annual review of fluid mechanics, 30(1):329–364, 1998. Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995. Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990–2999. PMLR, 2016. George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989. Rahim Entezari, Hanie Sedghi, Olga Saukh, and Behnam Neyshabur. The role of permutation invariance in linear mode connectivity of neural networks. In International Conference on Learning Representations, 2021. Johannes Feldmann, Nathan Youngblood, Maxim Karpov, Helge Gehring, Xuan Li, Maik Stappers, Manuel Le Gallo, Xin Fu, Anton Lukashchuk, Arslan Sajid Raja, et al. Parallel convolutional processing using an integrated photonic tensor core. Nature, 589(7840):52–58, 2021. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceedings of the International Conference on Learning Representations, 2019. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pp. 3259–3269. PMLR, 2020. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In International conference on machine learning, pp. 1319–1327. PMLR, 2013. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In Advances in Neural Information Processing Systems, volume 27, 2014. Wolfgang Hackbusch. Multi-grid methods and applications, volume 4. Springer Science & Business Media, 2013. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015. John E Hopcroft, Jeffrey D Ullman, and Alfred Vaino Aho. 
Data structures and algorithms, volume 175. Addison-Wesley, Boston, MA, USA, 1983.
U9sHVjidYH
The assumptions made in the theoretical part are not necessarily valid for the real scenarios of transformers with softmax (instead of hardmax) and learned queries & keys (they are not necessarily independent).
On the Efficiency of Transformers: The Effect of Attention Rank Anonymous authors Paper under double-blind review Abstract Transformers have showcased superior performance across a variety of real-world scenarios, marking the advent of a period dominated by large language models. However, with the escalating complexity of these models and the continuous enlargement of training datasets, efficiency-related challenges have become more pronounced. In this study, we investigate the influence of the rank of attention matrices on the training and performance of these models. We first gain insight by benchmark tasks such as BERT and GPT-2. Our findings underscore that (i) the mean rank of attention matrices is stable throughout the training, and the initial rank is a dependable indicator of the final rank; (ii) a distinct positive relationship exists between the attention rank and the effectiveness of the model, where elevated ranks correlate with diminished loss and expedited convergence. These insights reveal a relationship between initial attention matrix rank and performance. We proceed to investigate the impact of hyperparameters on the initial rank. The study unveils that modifying the softmax temperature or the head dimension can amplify the ranks, with the former exerting a more significant effect. Notably, we theoretically identify the characterization in the attention matrix rank at low temperatures, and we demonstrate the existence of an upper bound of attention matrix rank with respect to the head dimension. These observations are validated through trials on a high-rank target, underscoring instances where traditional setups fall short. 1 Introduction In recent years, Transformer-based neural network models have reshaped the landscape of machine learning, demonstrating unparalleled success across a myriad of applications including natural language processing (NLP) [Vaswani et al., 2017; Devlin et al., 2019; Raffel et al., 2020; Radford et al., 2018; Rae et al., 2021; Dehghani et al., 2023; Touvron et al., 2023; Liu et al., 2019; Hao et al., 2020; Liu et al., 2021; Yuan et al., 2022], computer vision (CV) [Chen et al., 2021b; Wang et al., 2022; Liang et al., 2021; Lu et al., 2022; Zhu et al., 2021; Wang et al., 2021], audios [Sung et al., 2022; Tsimpoukelli et al., 2021; Li et al., 2022], interdisciplinary sciences [Jumper et al., 2021], and so on. Their core architecture module, anchored by the so-called attention mechanism, has been proved to be a cornerstone particularly in capturing linguistic relationships with intricacies and nuances, thereby driving the current NLP renaissance and leading to the new era of large foundation models represented by ChatGPT and GPT-4. However, in the meantime, with this rise in prominence comes pressing challenges. As model architectures burgeon in complexity and training data swells in volume, the looming issue of efficiency becomes highly inescapable [Shen et al., 2023]. In this study, we carry out a thorough investigation on the rank properties of the Transformer model. Mathematically, the central attention mechanism is designed to weigh the significance and correlations of input tokens via, e.g., inner products between trainable transformations on inputs, which is often formulated as attention matrices. As a fundamental concept in algebra, the matrix rank is supposed to impact the capacity (expressive ability) and learning performance of the attention mechanism and hence Transformer models. 
However, although there are findings on the low-rank bottleneck [Kanai et al., 2018; Bhojanapalli et al., 2020; Dong et al., 2021; Lin et al., 2022], and several Transformer-based variants to reduce the computational and memory bottlenecks of modeling long sequences from the perspective of attention rank [Chen et al., 2021a; Wang et al., 2020; Hu... the effect of attention ranks has been overshadowed by other model intricacies to a large extent. In this regard, one may pose the following fundamental questions: 1. Intuitively, both the attention rank and model performance vary during the training process. How do they affect and evolve with each other? 2. What is the influence of hyperparameters configuration in attention matrices on the attention rank, and hence the model performance and parameter efficiency? 3. Can these findings and insights provide tutorial guidance in practical applications? We develop principled results on the first and second questions via systematic experiments and rigorous mathematical analysis and make further steps on the third question by numerical verifications under toy but representative scenarios. The primary contributions can be summarized as follows: 1. **Stability of attention rank**: We present empirical evidence on real-world NLP benchmarks showcasing that the rank of attention matrices remains almost constant during the training process. This makes the initial attention rank an applicable and convenient measure of the attention rank along with training. 2. **Connections between attention rank and performance**: Our findings illuminate a direct and positive correlation between the attention rank and model performance. Specifically, a higher rank usually leads to expedited convergence and decreased loss, especially when learning high-rank targets. Combined with Point 1, this emphasizes the importance of the initial rank. Consequently, these insights not only streamline the model design process as well as the hyperparameter selection but also contribute to conserving valuable computational resources. 3. **Effect of softmax temperature and head dimension on attention rank**: We provide a fine-grained analysis of factors affecting the (initial) attention rank. Notably, it is shown that while both the softmax temperature and head dimension play a role, the impact of temperature is much more pronounced. 4. **Theoretical demonstration**: Under the setting of reduced temperatures, we perform rigorous mathematical analysis on the rank of attention matrices. The results (i) establish an upper bound on the (initial) attention rank, suggesting the existence of low-rank limits; (ii) imply a model reduction effect corresponding to parameter efficiency. That is, it is sufficient for the attention rank to reach saturation given a relatively small head dimension. 5. **Validations**: We numerically verify the results under a controlled but representative setting, where challenges that may be encountered in real-world tasks are mainly emphasized via target ranks. This validation underscores the applicability and robustness of our findings and insights. The rest of this paper is organized as follows. In Section 2, we discuss the related work centering around the attention rank. Section 3 provides pivotal findings in real-world applications (BERT and GPT-2 on benchmark datasets). Section 4 includes the fine-grained mathematical analysis on the attention rank. In Section 5, we perform numerical verifications to validate our results and insights. 
All the details of proofs can be found in the appendix. **Notations.** Throughout this paper, we use normal letters to denote scalars, particularly the letters \( n, d, i, j, k \) to represent positive integers. Boldface lower-case/capital letters are reserved for vectors/matrices. Let \( \|x\|_p := \left( \sum_{i=1}^n x_i^p \right)^{1/p} \) be the \( \ell^p \)-norm for any \( x \in \mathbb{R}^n \) and \( p \in [1, \infty] \). Denote the standard basis of \( \mathbb{R}^n \) by \( \{e_i\}_{i=1}^n \), where \( e_i \) is the vector of all zeros except that the \( i \)-th position is 1. Let \( 0_n \in \mathbb{R}^n \) be the vector of all zeros. Let \( [n] := \{1, 2, \ldots, n\}, n \in \mathbb{N}_+ \). For a probability space \( (\Omega, \mathcal{F}, P) \), denote the probability of a measurable event \( E \in \mathcal{F} \) by \( P(E) \). Let \( \mathcal{N}(\mu, \Sigma) \) be the multivariate normal distribution defined on \( \mathbb{R}^n \), where \( \mu \in \mathbb{R}^n \) is the expectation and \( \Sigma \in \mathbb{R}^{n \times n} \) is the covariance. We use the big Omega notation \( f(n) = \Omega(g(n)) \) to represent that \( f \) is bounded below by \( g \) asymptotically, i.e., there exists \( c > 0, n_0 \in \mathbb{N}_+ \) such that \( f(n) \geq cg(n) \) for any \( n \geq n_0 \). 2 RELATED WORK The exploration of the rank of the Transformer attention matrix has been a focus in previous research (Kanai et al., 2018; Bhojanapalli et al., 2020; Dong et al., 2021; Lin et al., 2022). Bhojanapalli et al. (2020) unveiled a restriction associated with the low-rank bottleneck in attention heads, attributed to the proportional relationship between the number of heads and the size of each head in prevailing architectures. Dong et al. (2021) introduced an innovative perspective of interpreting self-attention networks. Their study elucidated that the networks’ output is an amalgamation of lesser components, or pathways. In the absence of skip connections and multi-layer perceptrons (MLPs), they established that the output gravitates towards a rank-1 matrix at a doubly exponential rate. On the other hand, a suite of Transformer-based adaptations (Chen et al., 2021a; Wang et al., 2020; Hu et al., 2022; Guo et al., 2019; Lin et al., 2022) has emerged to mitigate the inherent bottlenecks, notably computational and memory constraints. For instance, Wang et al. (2020) ascertained that the self-attention mechanism’s complexity is reducible, attributing this to its low-rank matrix approximation. The innovative self-attention mechanism they introduced marked a reduction in complexity. Meanwhile, Guo et al. (2019) incorporated low-rank constraints, a modification that manifested improvements in specific tasks. In a parallel vein, Chen et al. (2021a) noted the prowess of sparse and low-rank approximations in distinct scenarios. Their efficacy was found to be contingent on the softmax temperature in attention, with a combined sparse and low-rank approach superseding individual performances. In the context of our research, a meticulous analysis of attention matrices’ rank and its bearing on model efficiency and performance is conducted. We establish that the mean rank remains consistent throughout the training, positioning the initial rank as an accurate predictor of the end rank. Furthermore, a clear linkage is discerned between increased attention ranks and a reduction in loss and accelerated convergence, especially for high-rank targets. 
Delving into the impact of different configurations on the initial rank, we observe that both the softmax temperature and head dimension ($d_h$) adjustments lead to augmented ranks. The softmax temperature adjustments are particularly prominent. At lower temperatures, a unique attention matrix rank pattern emerges. Our theoretical insights, corroborated by experimental assessments on an optimized model, accentuate the limitations inherent in traditional configurations, underscoring the pivotal role of these parameters. 3 ATTENTION RANK IN BERT AND GPT-2 In the realm of Natural Language Processing (NLP), transformer-based models have risen to prominence, with the self-attention mechanism being instrumental in their ascendancy by offering enhanced handling of sequential data. We first delve into an analytical comparison of two renowned models, BERT and GPT-2, with a particular focus on attention matrix rank. The rank of a matrix, a concept central to linear algebra, serves as a critical element in our analysis, offering insights into the amount of distinct information encapsulated within the matrix. In the context of transformer models, understanding the rank of the attention matrix is crucial, as it potentially correlates with the model’s performance. 3.1 FORMULATIONS To delve deeper into the intricacies of this mechanism, we commence with its mathematical formulations. A transformer consists of several interconnected transformer blocks. For each head $h \in \{1, 2, \cdots, H\}$ in each block, we have $$V^h = XW^h_v, \quad K^h = XW^h_k, \quad Q^h = XW^h_q,$$ where $W^h_v, W^h_k, W^h_q \in \mathbb{R}^{d \times d_h}$, and $d_h$ is the head dimension with $d_h = d/H$. We denote the input by $X \in \mathbb{R}^{n \times d}$. The attention (score) matrix is formulated as $$\text{Attn}^h = \text{softmax}\left(\frac{Q^h(K^h)^T}{T}\right),$$ where $T > 0$ is the temperature, and is typically assigned the value $\sqrt{d_h}$ in most applications. The subsequent output is $$O_{\text{attn}} = \text{LN}(\text{Concatenate}_{[H]}(\text{Attn}^h V^h) W_o + X)$$ (3) with $W_o \in \mathbb{R}^{d \times d}$ and $\text{LN}$ as the layer normalization. This yields the final output $$O_{\text{output}} = \text{FFN}(O_{\text{attn}}),$$ (4) where $\text{FFN}$ is the feedforward neural network. From the formulation, we find that the head dimension $d_h$ and the temperature $T$ largely dictate the rank of attention matrices. We will show that these hyperparameters play a vital role in determining both the representational prowess and efficiency of the attention mechanism and transformer model later. ### 3.2 Experiments **Task.** Our experiments were initiated with a focus on the key parameters $d_h$ and $T$, owing to their critical role in the rank of the attention matrix. We employed BERT and GPT-2 models for this purpose. The core aim was to investigate how alterations to $d_h$ and $T$ would influence the rank in training dynamics and, subsequently, on the overall performance of these BERT and GPT-2 models. The IMDB and Wiki datasets served as the data for our training processes. **Model and hyper-parameters.** We utilized transformer models composed of 6 layers, with an embedding size of 256. Training batches were made up of 8 samples and employed a learning rate of $5 \times 10^{-5}$. In order to evaluate the rank, we chose four random samples each from the training and testing datasets and computed the average attention rank across all transformer blocks and heads. 
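For concreteness, the following sketch shows one way to evaluate the per-head attention rank from Eqs. (1)–(2), counting singular values above a small threshold and averaging over heads. The sizes, random inputs, scaling, and the $10^{-8}$ tolerance (the threshold we adopt later in Sect. 5.1) are illustrative assumptions rather than the exact evaluation code used for BERT and GPT-2.

```python
import numpy as np

def attention_rank(X, Wq, Wk, T, tol=1e-8):
    """Numerical rank of softmax(Q K^T / T) for a single head, via SVD."""
    Q, K = X @ Wq, X @ Wk
    logits = Q @ K.T / T
    logits -= logits.max(axis=1, keepdims=True)   # row-wise stabilization of softmax
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > tol).sum())

# illustrative sizes only; the real models, data, and initializations are not reproduced here
n, d, H = 128, 256, 4
d_h = d // H
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
ranks = [attention_rank(X,
                        rng.standard_normal((d, d_h)) / np.sqrt(d),   # toy stand-in for learned W_q
                        rng.standard_normal((d, d_h)) / np.sqrt(d),   # toy stand-in for learned W_k
                        T=np.sqrt(d_h))
         for _ in range(H)]
print("mean attention rank over heads:", float(np.mean(ranks)))
```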
We also monitored the variance between head and data samples to gain insights into the attention mechanism’s stability and consistency.\footnote{It should be noted that the training of GPT-2 necessitated the masking of attention matrices to preclude the model from accessing future data. However, this mask was not applied during the rank evaluation.} Visual representations of our experimental outcomes are delineated in Figures 1 - 4, offering an illustrative analysis that aids in comprehensively understanding the effects of variations in $d_h$ and $T$ on the rank in the training dynamics and final performance. ![Figure 1: BERT on IMDB. From left to right: training rank, validation rank, and validation loss.](image1) ![Figure 2: BERT on Wiki. From left to right: training rank, validation rank, and validation loss.](image2) 3.3 Discussions The graphical representations and data derived from our experiments yield several key insights into the behavior of the attention matrix rank. We distill our primary observations as follows: 1. **Rank Stability Throughout Training**: One salient observation is the stable nature of the attention matrix rank during the training process. Across diverse experiments, the rank exhibits minimal fluctuations, underscoring that the initial rank profoundly impacts subsequent training phases. This stability in both the training and validation phases accentuates the critical role of model initialization. 2. **Role of Temperature (T)**: Alterations in $T$ markedly impact the initial rank of the attention matrix. Across all values of $d_h$, an increase in $T$ correlates with a higher rank. A distinct divergence is observed between the rank curves corresponding to $T = 0.001$ and $T = 1$, highlighting the pivotal role of temperature in shaping the matrix’s initial structure and, by extension, its representational capabilities. This underscores the imperative of judiciously choosing the $T$ value. 3. **Effect of Head Dimension ($d_h$)**: In contrast to $T$, variations in $d_h$ exert a less pronounced impact on the attention matrix rank. A slight elevation in rank is observed when $d_h$ ascends from 32 to 64. However, this elevation becomes marginal when $d_h$ escalates from 64 to 128, despite the absolute increment in $d_h$ being larger, especially when $T$ is larger. This implies a diminishing return in rank enhancement as $d_h$ increases and underscores that the influence of $d_h$ is comparatively subdued relative to $T$ variations. 4. **Association with Model Performance**: A higher attention matrix rank correlates with enhanced model efficacy, as manifested by reduced validation loss. Models conditioned with $T = 1$ consistently eclipse those conditioned with $T = 0.001$, registering lower validation losses. Interestingly, the disparity in attention matrix ranks for different $d_h$ values (32, 64, 128) is negligible, and the performance of the corresponding models is almost analogous. These observations are invariant across multiple experiments, lending credence to the universality of these insights. 4 Fine-Grained Theoretical Analysis As is shown in Figures 1–4 and discussed in Section 3.3, the attention rank largely determines the overall model performance, with a higher initial rank leading to reduced loss and faster convergence. In addition, compared to the head dimension, the softmax temperature has a much more pronounced impact on the (initial) attention rank. 
This section provides a fine-grained analysis of the low-temperature case associated with the “hardmax” activation, illustrating the existence of the low-rank barrier and model reduction effect. Formulations. Let \( X := [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^{n \times d} \) be the input data, where \( n \) denotes the sequence length and \( d \) is the input dimension. Let \( (K, Q) = (XW_k, XW_q) \) be the key-query pair with trainable parameters \( \theta := (W_k, W_q) \in \mathbb{R}^{d \times d_h} \times \mathbb{R}^{d \times d_h} \) (\( d_h \) is the head dimension), i.e., \( K := [k_1, k_2, \ldots, k_n]^T \in \mathbb{R}^{n \times d_h}, Q := [q_1, q_2, \ldots, q_n]^T \in \mathbb{R}^{n \times d_h} \) with \( k_i^T = x_i^T W_k, q_i^T = x_i^T W_q, i = 1, 2, \ldots, n \). The basic form of the self-attention (score) matrix is defined as \[ \text{Attn}(X; \theta) := \text{softmax} \left( \frac{QK^T}{T} \right) = \text{softmax} \left( \frac{XW_q W_k^T X^T}{T} \right), \] where \( T := T(n, d, d_h) > 0 \) is the temperature. By convention, for any \( A = [a_{ij}] \in \mathbb{R}^{n \times n} \), \[ e_i^T \text{softmax}(A)e_j := \frac{\exp(a_{ij})}{\sum_{j=1}^n \exp(a_{ij})} \quad \text{with } \{e_i\}_{i=1}^n \text{ as the standard basis of } \mathbb{R}^n. \] For the low-temperature case (\( 0 < T \ll 1 \)), (5) is approximately \[ \text{hardmax} \left( \frac{XW_q W_k^T X^T}{T} \right). \] See Lemma 1 in the appendix for further details. Here, the maximum is taken in a row-wise sense: for a matrix \( A = [a_{ij}] \in \mathbb{R}^{n \times n}, e_i^T \text{hardmax}(A) := e_{k_i} \) with \( k_i := \arg \max_{j \in [n]} a_{ij} \). We have the following estimate on the rank of (6). **Theorem 1.** Let the parameters \( W_q, W_k \) be Gaussian random matrices, i.e., the entries of \( W_q, W_k \) are independent \( \mathcal{N}(0, 1) \) random variables. Assume that the input data \( X \) satisfies \( XX^T = I_n \). Then for any \( n \in \mathbb{N}_+ \) appropriately large, we have \[ \mathbb{E}_{W_q} [\text{rank} \left( \text{hardmax} \left( \frac{XW_q W_k^T X^T}{T} \right) \right)] \leq (1 - \exp(-1))n + 1 \approx 0.63n. \] **Proof sketch.** The theorem is proved via the following procedure: 1. The orthonormal input yields the independence across different rows of \( XW_q W_k^T X^T \) (Lemma 4 in the appendix), further implying that these rows are i.i.d. as \( \mathcal{N}(0_n, QK^T) \) for any fixed (Gaussian random) \( W_k \). 2. The hardmax calculation within respective rows is reduced to an elementary birthday problem (Lemma 3 in the appendix), which gives the estimate on the number of columns with all zeros; 3. The estimate is further analyzed via the infinitesimal order estimation (Lemma 2 in the appendix) and the AM-GM inequality (suggesting that “=” holds if and only if all probabilities are equal). **Remark 1.** The assumption on input data seems strong at first glance. However, this assumption is approximately reasonable in applications where different \( x_i \)'s (corresponding to different tokens, for example) are often embedded with independent (and isotropic) Gaussian vectors. According to Vershynin (2018) (Lemma 3.2.4 and Remark 3.2.5), \( x_i \)'s tend to be almost orthogonal in high dimensions (\( d \gg 1 \)) after proper scaling (e.g., normalization). **Remark 2.** Notice that the hardmax(\(\cdot\)) operator is scaling-invariant w.r.t. positive constants, i.e., \( \text{hardmax}(cA) = \text{hardmax}(A) \) for any \( c > 0 \). 
Theorem 1 also holds when the input sequences are not normalized. --- 2The hardmax activation is occasionally used in applications for computational efficiency. See CV examples in Elsayed et al. (2019); Papadopoulos et al. (2021) for more details. 3That is, the input sequence is orthonormal across time steps. The model reduction effect. In fact, the above rank (LHS of (7)) reaches saturation when increasing the head dimension \(d_h\), provided an appropriate scaling (e.g., \(1/\sqrt{d_h}\)). Informally, recall that the rows of \(XW_q W_k^\top X^\top\) are independent and identically distributed as \(\mathcal{N}(0_n, KK^\top)\), according to the Johnson–Lindenstrauss lemma (Johnson & Lindenstrauss, 1984). We have \[ e_i^\top KK^\top e_j = k_i^\top k_j = x_i^\top W_k W_k^\top x_j \\ \approx x_i^\top x_j \Rightarrow KK^\top \approx XX^\top, \quad \text{when } d_h = \Omega(\log n). \] That is, further increasing the head dimension after \(d_h = \Omega(\log n)\) has limited effect on the rows’ distribution (always approximately \(\mathcal{N}(0_n, XX^\top)\) only depending on \(n, d\)), and hence on the rank of hardmax attention. This is the model reduction effect: selecting the critical configuration \(d_h = \Omega(\log n)\) achieves optimal efficiency, since further increasing parameters leads to diminishing marginal utility. 5 Numerical Verifications Building on the insights from our previous experiments and theoretical results, we established that the rank of the attention matrix is crucial in determining the model’s overall performance. Notably, the initial rank predominantly governs the rank throughout the training process, underscoring the significance of model initialization. Now, we aspire to validate our theoretical analysis results and verify the impact of attention rank on model performance in a more controlled data environment. 5.1 Initial Attention Rank Task. We first focus on the attention matrix at initialization. Recall that \[ \text{Attn} = \text{softmax}\left(\frac{XW_q(XW_k)^\top}{T}\right), \] where \(X \in \mathbb{R}^{n \times d}\) and the elements of \(X, W_q,\) and \(W_k\) are drawn from a \(\mathcal{N}(0, 1)\) distribution. Our aim is to explore and understand the behavior of the attention matrix’s rank at different values of the temperature parameter \(T\) and the dimensionality \(d_h\). Model and Hyperparameters. For our assessment, we set \(n = 100\) and \(d = 256\). We test \(d_h\) at values \{8, 16, 32, 64, 128\} and \(T\) across a logarithmic scale from \(10^{-4}\) to \(10^3\). Employing singular value decomposition, we ascertain the matrix rank, treating near-zero singular values as zero with a threshold of \(10^{-8}\). In order to bolster the reliability of our findings, we calculate the matrix rank for each \(d_h\) and \(T\) combination over three trials, and compute the mean and standard deviation. This systematic approach ensures a comprehensive analysis, providing insights that are both robust and replicable. Results are presented in Figure 5. Figure 5 reveals salient trends concerning the matrix rank, temperature parameter \(T\), and diverse \(d_h\) values. Significantly, the matrix rank demonstrates acute sensitivity to temperature \(T\). For all \(d_h\) values, an increase in \(T\) leads to a marked rise in matrix rank, eventually achieving full rank. This pattern suggests that matrices at higher temperatures generally maintain a superior effective rank, emphasizing the pivotal role of \(T\) in the attention mechanism. 
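A minimal sketch of this rank-versus-temperature sweep is given below, using the sizes stated above ($n = 100$, $d = 256$) and the $10^{-8}$ SVD threshold; for the hardmax check against the bound of Theorem 1 we additionally draw $X$ with orthonormal rows, as the theorem assumes. The exact numbers will therefore differ from Figure 5, since only one trial with an arbitrary seed is run and the input distribution differs.

```python
import numpy as np

def softmax_rank(X, Wq, Wk, T, tol=1e-8):
    """Numerical rank of softmax(X Wq Wk^T X^T / T), via SVD."""
    A = X @ Wq @ Wk.T @ X.T / T
    A -= A.max(axis=1, keepdims=True)
    A = np.exp(A)
    A /= A.sum(axis=1, keepdims=True)
    return int((np.linalg.svd(A, compute_uv=False) > tol).sum())

def hardmax_rank(X, Wq, Wk):
    # the 0/1 hardmax matrix has rank equal to the number of distinct row-wise argmax columns
    scores = X @ Wq @ Wk.T @ X.T
    return len(set(scores.argmax(axis=1).tolist()))

n, d, d_h = 100, 256, 32
rng = np.random.default_rng(0)
X = np.linalg.qr(rng.standard_normal((d, n)))[0].T    # rows of X are orthonormal: X X^T = I_n
Wq, Wk = rng.standard_normal((d, d_h)), rng.standard_normal((d, d_h))

for T in [1e-4, 1e-2, 1.0, 10.0, 100.0, 1000.0]:
    print(f"T={T:>8}: softmax rank = {softmax_rank(X, Wq, Wk, T)}")
print("hardmax rank:", hardmax_rank(X, Wq, Wk),
      " vs bound (1 - 1/e) n + 1 ≈", round((1 - np.exp(-1)) * n + 1, 1))
```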
Elevated temperatures result in a wider attention distribution, potentially amplifying the attention matrix’s rank. For temperatures below 10, the matrix primarily manifests a low-rank property, implying limited expressiveness. Intriguingly, for all \(d_h\) values, minimal rank variance is observed in the attention matrix around \(T = 10\) and at particularly low \(T\) values (around \(10^{-4}\)). At these exceedingly low \(T\) values, the behavior resembles hardmax. Hence, from a rank perspective, conditions with \(T < 10\) mirror the hardmax scenario. Beyond \(T = 10\), the rank rapidly surges with increasing \(T\), ultimately reaching full rank. In contrast, although \(d_h\) determines the maximum achievable rank for the \(W_q\) and \(W_k\) matrices, and it indirectly regulates the overall rank of the attention matrix, the impact of \(d_h\) on rank is far less significant than that of \(T\). Observing settings with very low temperatures, we find that (i) there exists an upper bound for the attention rank, which is consistent with our theoretical estimate. Figure 5: The variation of (initial) attention rank for different $d_h$, $T$ values, where error bars represent the standard deviation. (approximately 0.63$n$); and (ii) when the head dimension $d_h$ is not too large (much smaller than $n$), the attention rank has reached saturation, aligning with our theoretical predictions as well. The error bars, symbolizing the standard deviation, convey inherent variations. Yet, these variations are eclipsed by the predominant patterns. In most instances, the standard deviation is trivial, signifying a consistent trend. This consistency emphasizes the dependability of the identified patterns and trends. Our findings elucidate the intricate interplay between temperature $T$ and $d_h$ in determining the attention matrix’s effective rank. Such insights could offer invaluable guidance for model architecture design, guiding choices about the optimal temperature and number of attention heads to enhance performance for particular tasks or datasets. **Remark 3.** It is noteworthy that while the trend of rank variation with $T$ concurs with earlier observations in GPT-2 and BERT, where the rank ascends with increasing $T$, in our simplified setting, the maximum rank can achieve full rank. In contrast, in GPT-2 and BERT experiments, full rank is never attained. This discrepancy arises from the inherently low-rank characteristic of real text data, marked by frequent appearances of common words like “a” and ”the.” Additionally, the $T$ scale observed in our experiments differs from that in GPT-2 and BERT, as these models’ actual training employed Kaiming initialization instead of the standard $\mathcal{N}(0, 1)$ initialization. ### 5.2 Experiments on High-Rank Data **Task.** In our pursuit to delve deeper into the effects of the attention matrix’s rank on the model’s expressive capability, a simplified experiment was conducted in a more controlled data environment, emphasizing the influence of its initial rank on the model’s performance on high-rank signals. We constructed a high-rank dataset that closely mirrors our conditions of interest. The sequence $X$ is consistently composed of 100 characters. Characters were randomly and uniformly selected from a predefined set to ensure a diverse array of sequences. The sequence $Y$ is formulated using the equation $Y = XP$, where $P$ is a $100 \times 100$ matrix, and each row in $P$ contains a single ’1’, with all other elements set to ’0’. 
We fixed the rank of $P$ at 80 to simulate a high-rank target, yielding a dataset comprising 5000 data pairs $(X, Y)$. **Model and Hyperparameters.** Our model begins with an embedding layer that transposes the input into a dense vector space, followed by a transformer block that encapsulates key elements like multi-head self-attention, position-wise feed-forward networks, skip connections, and layer normalization. We set $n = 100$ and $d = 256$. We test $d_h$ at values $\{32, 64, 128\}$ and $T$ at values... Table 1: Performance across different hyperparameter configurations and varied attention ranks. | $T$ | $d_h$ | Final Loss | Initial Rank | Final Rank | Accuracy | |-----|-------|------------|--------------|------------|----------| | 1 | 32 | 0.52 | 54.29 ± 0.44 | 52.25 ± 0.44 | 0.94 | | 1 | 64 | 0.72 | 57.70 ± 0.40 | 56.88 ± 0.40 | 0.93 | | 1 | 128 | 0.91 | 59.90 ± 0.61 | 59.05 ± 0.61 | 0.92 | | 100 | 32 | 0.00 | 95.62 ± 0.06 | 95.38 ± 0.06 | 1.00 | | 100 | 64 | 0.00 | 91.40 ± 0.29 | 90.17 ± 0.29 | 1.00 | | 100 | 128 | 0.06 | 85.35 ± 2.05 | 78.25 ± 2.05 | 0.99 | | 1000| 32 | 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 1.00 | | 1000| 64 | 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 1.00 | | 1000| 128 | 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 1.00 | According to Figure 5, different values of $T$ result in different initial attention matrix ranks. Specifically, with $T = 1$ (the usual setting), the initial attention matrix rank is less than the data’s rank (80), while with $T = 100$ and $T = 1000$, the initial attention matrix rank exceeds the data’s rank (80). The forward pass of the model culminates in a linear layer that delivers the final predictions, ensuring that the output is aligned with the expected results. Optimization is facilitated by the Adam optimizer, with a learning rate of 0.003. We gauge the model’s evolution through the cross-entropy loss, a conventional metric for classification tasks. The experiment is characterized by an embedding size of 256, a total of 50 training epochs, and a batch size of 200. The findings are illustrated in Table 1. 5.3 DISCUSSIONS Our refined model unveiled insights that affirm our experimental discoveries. First, the influence of $T$ and $d_h$ on the final rank is consistent with their impact on the initial rank, as observed in Figure 5. This consistency further illustrates the stable rank of the attention matrix throughout training. For example, at $T = 1$ and $d_h = 32$, the rank slightly adjusts from $54.29 \pm 0.44$ to $52.25 \pm 0.44$. A similar steadiness is reflected in other settings, consistent with our experimental observations on GPT-2 and BERT. Further, we find a discernible correlation between the attention matrix rank and the model’s overall performance. Setups where the rank exceeds the target rank (80) witness a significant enhancement in model efficacy. Specifically, setups with $T = 100$ and $T = 1000$ achieve an accuracy approaching 1.00, markedly superior to those at $T = 1$. Interestingly, variations in $d_h$ yield almost imperceptible performance differences, highlighting the attention matrix rank’s potential as a powerful early indicator for assessing model efficacy. 6 CONCLUSION This study examined the stability of attention rank and its association with model performance and efficiency. Our empirical results are bolstered by comprehensive mathematical analysis and numerical validation. Initially, we emphasized the consistent behavior of attention rank throughout training. 
Additionally, we found that the initial attention rank plays a pivotal role in determining the ultimate performance of the model. Moreover, we identified the substantial impact of softmax temperature and head dimension on attention rank, with temperature having a more dominant effect. These observations are crucial for enhancing model performance and efficiency. Future research will expand on these insights, delving further into the intricacies of attention mechanisms and revealing potential avenues for advanced applications. REFERENCES Srinadh Bhojanapalli, Chulhee Yun, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Low-rank bottleneck in multi-head attention models. In *International Conference on Machine Learning*, pp. 864–873. PMLR, 2020. Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, and Christopher Ré. Scatterbrain: Unifying sparse and low-rank attention. *Advances in Neural Information Processing Systems*, 34: 17413–17426, 2021a. Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 357–366, 2021b. Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme Ruiz, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd Van Steenkiste, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Collier, Alexey A. Gritsenko, Vighnesh Birodkar, Cristina Nader Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetic, Dustin Tran, Thomas Kipf, Mario Lucic, Xiaohua Zhai, Daniel Keysers, Jeremiah J. Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 7480–7512. PMLR, 23–29 Jul 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Volume 1 (Long and Short Papers)*, pp. 4171–4186, 2019. Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In *International Conference on Machine Learning*, pp. 2793–2803. PMLR, 2021. Gamaleldin Elsayed, Simon Kornblith, and Quoc V. Le. Saccader: Improving accuracy of hard attention models for vision. *Advances in Neural Information Processing Systems*, 32, 2019. Qipeng Guo, Xipeng Qiu, Xiangyang Xue, and Zheng Zhang. Low-rank and locality constrained self-attention for sequence modeling. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 27(12):2213–2222, 2019. Boran Hao, Henghui Zhu, and Ioannis Paschalidis. Enhancing clinical bert embedding using a biomedical knowledge base. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 657–661, 2020. Edward J. 
Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022. William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. *Contemp. Math.*, 26(189-206):2, 1984. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. *Nature*, 596(7873):583–589, 2021.
3FJOKjooIj
Why replace the heterogeneous encoder with GCN to implement the proposed method on homogeneous graph datasets? What happens if the heterogeneous encoder is replaced with other similar encoders, e.g., GAT?
Self-Supervised Heterogeneous Graph Learning: A Homophily and Heterogeneity View Yujie Mo\textsuperscript{1,2} Feiping Nie\textsuperscript{3} Ping Hu\textsuperscript{1} Heng Tao Shen\textsuperscript{1} Zheng Zhang\textsuperscript{4*} Xinchao Wang\textsuperscript{2*} Xiaofeng Zhu\textsuperscript{1*} \textsuperscript{1}School of Computer Science and Engineering, University of Electronic Science and Technology of China \textsuperscript{2}National University of Singapore \textsuperscript{3} School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University \textsuperscript{4}School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen Abstract Self-supervised heterogeneous graph learning has achieved promising results in various real applications, but it still suffers from the following issues: (i) meta-paths can be employed to capture the homophily in the heterogeneous graph, but meta-paths are human-defined, requiring substantial expert knowledge and computational costs; and (ii) the heterogeneity in the heterogeneous graph is usually underutilized, leading to the loss of task-related information. To solve these issues, this paper proposes to capture both homophily and heterogeneity in the heterogeneous graph without pre-defined meta-paths. Specifically, we propose to learn a self-expressive matrix to capture the homophily from the subspace and nearby neighbors. Meanwhile, we propose to capture the heterogeneity by aggregating the information of nodes from different types. We further design a consistency loss and a specificity loss, respectively, to extract the consistent information between homophily and heterogeneity and to preserve their specific task-related information. We theoretically analyze that the learned homophilous representations exhibit the grouping effect to capture the homophily, and considering both homophily and heterogeneity introduces more task-related information. Extensive experimental results verify the superiority of the proposed method on different downstream tasks. 1 Introduction Heterogeneous graph learning aims to extract and uncover meaningful hidden patterns in the heterogeneous graph, such that it outputs discriminative representations for different tasks \cite{Dong2017, Sun2021}. To alleviate the issue of limited labels in real scenarios, self-supervised heterogeneous graph learning (SHGL) has received increasing attention across diverse applications, such as social network analysis and recommendation systems \cite{Chen2022, Xie2022}. Existing SHGL methods typically employ meta-paths to extract semantic relationships among nodes of the same type in the heterogeneous graph, as illustrated in Figure 1(a). Consequently, such a process treats the heterogeneous graph as a composition of homogeneous graphs based on meta-paths \cite{Wang2023, Mo2023a}. This actually mines the homophily (\textit{i.e.}, connectivity and information aggregation among nodes within the same class) in the heterogeneous graph, as two nodes connected by meta-path generally tend to belong to the same class. For instance, in an academic heterogeneous graph with several node types (\textit{e.g.}, author, paper, and subject), if there exists a meta-path “paper-author-paper” between two papers (\textit{i.e.}, two papers are written by the same author), these two papers possibly belong to the same class. 
As a result, previous SHGL methods utilize meta-paths to explore the homophily, increasing the intra-class correlation and benefiting downstream tasks, as shown in Figure 1(b). *Corresponding authors: X. Zhu, X. Wang, and Z. Zhang Figure 1: Example and studies of meta-path-based graphs in previous SHGL. (a) For an academic heterogeneous graph, most previous SHGL employs meta-paths (e.g., P-A-P and P-S-P) to establish connections between two papers, and then ignores nodes from other types (e.g., Author) in meta-paths. (b) Intra-class correlation and node classification results (i.e., Micro-F1) by GCN (Kipf & Welling, 2017) on meta-path-based homogeneous graphs with different homophily ratios (HR, i.e., the ratio of nodes connected by meta-path belong to the same class). The higher the HR of the meta-path-based graph, the higher the intra-class correlation, thus benefiting the classification performance. However, existing SHGL methods still have limitations that need to be addressed. On the one hand, meta-paths are manually defined and require expert knowledge to select appropriate meta-paths for different tasks (Lv et al., 2021). Moreover, employing meta-paths to extract the relationships among nodes incurs considerable computation costs, which exponentially increase with the meta-path length. On the other hand, most previous SHGL methods overlook or cannot effectively utilize the heterogeneity (i.e., connectivity and information aggregation among nodes from different types) in the heterogeneous graph, which may carry significant information relevant to downstream tasks. Take the same academic heterogeneous graph as an example. If two authors have the same name in the same institution, it is difficult to distinguish these two authors. In contrast, if we consider the heterogeneity (e.g., each author’s published papers), we can easily distinguish them. As a result, previous SHGL methods may lose significant task-related information associated with the heterogeneity. Based on the above analysis, it is feasible to consider both homophily and heterogeneity in the heterogeneous graph without pre-defined meta-paths to improve the effectiveness of SHGL. To achieve this, there are at least two challenges to be solved, i.e., (i) capturing the homophily in the heterogeneous graph without relying on meta-paths; and (ii) effectively utilizing both homophily and heterogeneity in the heterogeneous graph, despite their inherent conflict. In this paper, to address the above challenges, we discard traditional meta-paths and propose a novel SHGL framework to capture both Homophily and hEROgeneity in the heterogeneous graph (HERO for short), as shown in Figure 2. Specifically, we obtain the closed-form solution of the self-expressive matrix to capture the homophily from the subspace and nearby neighbors and obtain homophilous representations, thus tackling Challenge (i). Meanwhile, we employ a heterogeneous encoder to aggregate the information of nodes from different types to capture the heterogeneity and thereby obtain heterogeneous representations. With homophilous and heterogeneous representations, we further design a consistency loss and a specificity loss to capture the invariance between them and preserve their respective task-related information in the latent space, respectively, thus tackling Challenge (ii). 
Finally, in theoretical analysis, the learned homophilous representations are proved to capture the homophily in the heterogeneous graph, while homophilous and heterogeneous representations are fused to introduce more information related to downstream tasks. Compared to previous SHGL methods, our main contributions can be summarized as follows: - We make the first attempt to understand the self-supervised heterogeneous graph learning without pre-defined meta-paths from the view of homophily and heterogeneity. - We propose to comprehensively capture the homophily from both the subspace and nearby neighbors as well as to discard pre-defined meta-paths that require expert knowledge. We further extract consistent and specific information between homophilous and heterogeneous representations to introduce more task-related information, thus achieving effectiveness. Figure 2: The flowchart of the proposed HERO. Specifically, HERO first employs the Multi-Layer Perception as encoder $g_\phi$ and learns a self-expressive matrix $S^*$ to capture the homophily and obtain homophilous representations $Z$. Meanwhile, HERO employs a heterogeneous encoder $f_\theta$ to aggregate the information of nodes from different types to obtain heterogeneous representations $\tilde{Z}$. After that, HERO designs a consistency loss $L_{con}$ and a specificity loss $L_{spe}$ to extract the consensus between $Z$ and $\tilde{Z}$ as well as to maintain their distinct information in different latent spaces, respectively. - We theoretically demonstrate that the learned homophilous representations have the grouping effect, thus capturing the homophily. Furthermore, we theoretically demonstrate that considering both homophily and heterogeneity introduces more task-related information than considering them individually, thus benefiting downstream tasks. - We experimentally demonstrate the superiority of the proposed method in terms of different downstream tasks on both heterogeneous graph datasets and homogeneous graph datasets, compared to numerous comparison methods. 2 Method Notations. Let $G = (V, E, X, T, R)$ represent a heterogeneous graph, where $V = \{v_i\}_{i=1}^{N}$ and $E$ indicate nodes set and edges set, respectively, and $N$ indicates the number of nodes. $X = \{x_i\}_{i=1}^{N}$ denotes the node features matrix, while $T$ and $R$ indicate node types set and edge types set, respectively. Given the heterogeneous graph, the meta-path used in previous SHGL methods can be defined in the form of $v_1 \xrightarrow{r_1} v_2 \xrightarrow{r_2} \cdots \xrightarrow{r_s} v_{s+1}$. It is a sequence of a composite relation $r_1 \circ r_2 \circ \cdots \circ r_s$ between node $v_1$ and node $v_{s+1}$, where $s$ indicates the length of meta-path and $\circ$ denotes the composition operator. Many previous SHGL methods explore the homophily in the heterogeneous graph with different meta-paths, as shown in Figure 1(a). In contrast, the proposed method mines both homophily and heterogeneity in the heterogeneous graph without pre-defined meta-paths, as shown in Figure 2, and we introduce the details of the proposed method as follows. 2.1 Homophily In this paper, we propose to adaptively learn the homophily in the heterogeneous graph without meta-paths. Actually, the homophily extraction in the heterogeneous graph aims at establishing connections and conducting information aggregation among nodes within the same class. 
Considering the learning scenario without labels, previous studies [Chapelle et al., 2002; Zhou et al., 2003] show two statements: (i) nodes in the same subspace are likely to belong to the same class; and (ii) nearby nodes are likely to belong to the same class. Hence, there are at least two ways for the homophily extraction in the heterogeneous graph. The first way is to connect and aggregate the information of nodes in the same subspace, while the second way is to connect and aggregate the information of each node and its nearby neighbors. In this paper, we first employ the Multi-Layer Perceptron (MLP) as the encoder \( g_\phi \) to obtain the \((l+1)\)-th layer node representations \( H^{(l+1)} \) by: \[ H^{(l+1)} = \sigma(H^{(l)}W^{(l)}), \] where \( \sigma \) is the activation function, \( W^{(l)} \) indicates the trainable parameters of \( g_\phi \), and \( H^{(0)} = X \). After that, we propose to capture the homophily in the subspace with a self-expressive matrix \( S \in \mathbb{R}^{N \times N} \) that linearly describes every node by all nodes, i.e., \[ H^{(l+1)} = SH^{(l+1)} + O^{(l+1)}, \] where \( O^{(l+1)} \) is a noise matrix. In Eq. (2), the representation of the \( i \)-th node \( h_i^{(l+1)} \) can be represented by \( h_i^{(l+1)} = s_{i1}h_1^{(l+1)} + \ldots + s_{iN}h_N^{(l+1)} \). In particular, the larger the weight (i.e., the value of \( s_{ij} \)), the higher the probability of the node \( v_i \) replaced by the node \( v_j \). According to the subspace-preserving property [He et al., 2003; Vidal, 2009], each node and its corresponding nodes with large weights are likely to fall in the same subspace, so they are likely to belong to the same class. Therefore, the self-expressive matrix describes each node by the nodes (i.e., with large weights) in the same subspace to capture the homophily. However, the self-expressive matrix may also introduce the nodes from different classes (i.e., with small weights) into the subspace to degrade the model performance. A good solution is to push the weight of nodes from different classes to be as small as possible. Based on the second statement, we propose to encourage the self-expressive matrix to focus more on the neighbors of each node in the original feature space and less on its faraway nodes that may come from different classes. To do this, we first calculate the feature distance matrix \( D \in \mathbb{R}^{N \times N} \) between all node pairs, where \( d_{ij} = \| x_i - x_j \|_2^2 \), and then treat the value less than the threshold as 0 to obtain a sparse feature distance matrix. Based on the above analysis, each node \( v_i \) and its neighbors set \( N_i \) (\( \forall v_j \in N_i, d_{ij} = 0 \)) should be further assigned with large weights in the self-expressive matrix while the weights of faraway nodes should be penalized. To achieve this, we propose to capture the homophily from both the subspace and nearby neighbors by: \[ \min_S \| H^{(l+1)} - SH^{(l+1)} \|_F^2 + \alpha \sum_{i,j=1}^{N} d_{ij}s_{ij} + \beta \sum_{i,j=1}^{N} s_{ij}^2, \] where \( \alpha \) and \( \beta \) are non-negative parameters to trade off three terms. In Eq. (3), the second term enables the self-expressive matrix to focus on the nearby neighbors (i.e., with small feature distance) of each node to capture the homophily. 
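For concreteness, below is a minimal sketch of the thresholded feature-distance matrix $D$ and of evaluating the three terms of Eq. (3) for a candidate self-expressive matrix $S$. It assumes PyTorch with dense toy inputs; the variable names and the threshold value are our own illustration, not taken from the released implementation.

```python
import torch

def sparse_feature_distance(X: torch.Tensor, threshold: float) -> torch.Tensor:
    """Pairwise squared Euclidean distances d_ij = ||x_i - x_j||_2^2, with values
    below `threshold` set to 0 so that nearby neighbors are not penalized."""
    D = torch.cdist(X, X, p=2) ** 2                     # (N, N)
    return torch.where(D < threshold, torch.zeros_like(D), D)

def homophily_objective(H: torch.Tensor, S: torch.Tensor, D: torch.Tensor,
                        alpha: float, beta: float) -> torch.Tensor:
    """The three terms of Eq. (3): reconstruction, distance penalty, l2 regularization."""
    recon = torch.norm(H - S @ H, p="fro") ** 2         # ||H - S H||_F^2
    dist_pen = alpha * (D * S).sum()                    # alpha * sum_ij d_ij * s_ij
    reg = beta * (S ** 2).sum()                         # beta * sum_ij s_ij^2
    return recon + dist_pen + reg

# toy usage
N, d = 8, 4
X = torch.randn(N, d)          # raw node features
H = torch.randn(N, d)          # node representations from the MLP encoder g_phi
S = 0.01 * torch.randn(N, N)   # a candidate self-expressive matrix
D = sparse_feature_distance(X, threshold=1.0)
loss = homophily_objective(H, S, D, alpha=0.1, beta=1.0)
```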
Moreover, the second term penalizes the weights among nodes from different classes induced by the first term, and the first term captures part connections within the same class missed by the second term, thus complementing each other. The third term aims to regularize the self-expressive matrix \( S \) to avoid the trivial solution. Actually, Eq. (3) takes the closed-form solution \( S^* \) as follows: \[ S^* = (H^{(l+1)}(H^{(l+1)})^T - \frac{\alpha}{2}D)(H^{(l+1)}(H^{(l+1)})^T + \beta I_N)^{-1}, \] where \( I_N \) is the identity matrix. In fact, the self-expressive matrix shares a similar idea with the widely used self-attention mechanism [Vaswani et al., 2017], i.e., linearly describe each sample with all samples. However, there are major differences between them, and we list them in Appendix A.3. After achieving the closed-form solution \( S^* \), we conduct information aggregation among nodes of the same type in the heterogeneous graph to obtain homophilous representations \( Z \), i.e., \[ Z = S^*H^{(l+1)}. \] However, directly obtaining \( Z \) by Eq. (5) incurs expensive computation costs due to the cubic time complexity in computing \( S^* \) in Eq. (4) and the quadratic time complexity in calculating \( S^*H^{(l+1)} \). To alleviate this issue, we avoid directly calculating \( S^* \) and reformulate Eq. (5) by the matrix identity transformation [Woodbury, 1950] (details are shown in Appendix C.3), i.e., \[ Z = H^{(l+1)}(H^{(l+1)})^T B - \frac{\alpha}{2}DB, \] where \( B = \frac{1}{\beta}H^{(l+1)} - \frac{1}{\beta}H^{(l+1)}(I_d + \frac{1}{\beta}(H^{(l+1)})^TH^{(l+1)})^{-1}(H^{(l+1)})^TH^{(l+1)} \), and \( I_d \in \mathbb{R}^{d \times d} \) is the identity matrix, where \( d \) indicates the dimension of node representations. By reordering the matrix multiplication, we further reduce the time complexity of Eq. (6) to \(O(Nd^2 + d^3 + kd)\), where \(d^2 \ll N\) and \(k \ll N^2\), and \(k\) indicates the nonzero entries of the sparse feature distance matrix \(D\), details are shown in Appendix B.2. Therefore, the proposed method is available to capture the homophily in the heterogeneous graph in an effective and efficient way. In this paper, we further prove that both the self-expressive matrix \(S^*\) and the learned homophilous representations \(Z\) capture the homophily by having the grouping effect. To do this, we first follow [Li et al., 2020] to define the grouping effect as follows. **Definition 2.1.** Given the nodes set \(V = \{v_i\}_{i=1}^{N}\), if \(|c_{ik} - c_{jk}| \to 0 (\forall 1 \leq k \leq F')\) holds for every \(v_i\) and \(v_j\) satisfying \(v_i \rightarrow v_j\) (i.e., \(\|x_i - x_j\|_2 \to 0\)), the matrix \(C \in \mathbb{R}^{N \times F'}\) has the grouping effect. Based on Definition 2.1, if a matrix \(C\) has the grouping effect and the condition \(v_i \rightarrow v_j\) holds for two nodes \(v_i\) and \(v_j\), then every element of the \(i\)-th and the \(j\)-th row \((c_i\) and \(c_j)\) should be aligned. Indeed, the condition \(v_i \rightarrow v_j\) indicates two nodes may belong to the same class, thus the alignment between \(c_i\) and \(c_j\) reflects the homophily among nodes. After that, we follow Definition 2.1 to prove the grouping effect of the self-expressive matrix \(S^*\) and homophilous representations \(Z\) by Theorem 2.2, whose proof can be found in Appendix C.4. 
**Theorem 2.2.** Both self-expressive matrix \(S^* \in \mathbb{R}^{N \times N}\) and homophilous representations \(Z \in \mathbb{R}^{N \times d}\) have the grouping effect for any two nodes \(v_i\) and \(v_j\) that hold the condition \(v_i \rightarrow v_j\), i.e., \[ v_i \rightarrow v_j \Rightarrow |s^*_{ip} - s^*_{jp}| \to 0, \text{ and } |z_{iq} - z_{jq}| \to 0, \forall 1 \leq q \leq d, 1 \leq i, j, p \leq N. \] Based on Theorem 2.2, if two nodes \(v_i\) and \(v_j\) have similar node features, i.e., being likely to belong to the same class, both their self-expressive vectors (i.e., \(s^*_i\) and \(s^*_j\)) and representations (i.e., \(z_i\) and \(z_j\)) are expected to be similar. As a result, both \(S^*\) and \(Z\) have the grouping effect, thus capturing the homophily in the heterogeneous graph (verified in Section 3.2.3). ### 2.2 Heterogeneity In addition to the homophily, the heterogeneity is also significant for the heterogeneous graph as it may contain task-related information [Zhang et al., 2022a]. However, most existing SHGL methods overlook or cannot effectively utilize the heterogeneity from nodes of different types. As a result, previous SHGL methods may lose discriminative information in the heterogeneity to induce a negative impact on downstream tasks. Therefore, it is necessary to capture the heterogeneity in the heterogeneous graph to improve the effectiveness of SHGL. To do this, we propose to aggregate the information of nodes from different types in the heterogeneous graph. Specifically, for the node \(v_i\), we employ a heterogeneous encoder \(f_\theta\) to aggregate the information of its relevant one-hop neighbors (i.e., nodes of other types) based on the edge type \(r \in R\), and then obtain the edge-based representations \(\tilde{z}^{(l+1)}_{i,r}\) by: \[ \tilde{z}^{(l+1)}_{i,r} = \delta \left( \frac{1}{m} \sum_{j=1}^{m} \left\{ \tilde{z}^{(l)}_j \mid v_j \in N_{i,r} \right\} W^{(l)}_r \right), \] where \(\delta\) indicates the activation function, \(N_{i,r}\) indicates the one-hop neighbors set of the node \(v_i\) based on the edge type \(r\), \(m\) is the number of the neighbors, and \(W^{(l)}_r\) indicates the trainable parameters of \(f_\theta\). Considering all edge types in the heterogeneous graph, we further obtain the heterogeneous representations by fusing all edge-based representations, i.e., \[ \tilde{Z}^{(l+1)} = \frac{1}{|R|} \sum_{r \in R} \tilde{Z}^{(l+1)}_{i,r}, \] where \(|R|\) indicates the number of edge types. Finally, we use \(\tilde{Z}\) to represent the last layer of heterogeneous representations \(\tilde{Z}^{(L)}\) for brief, where \(L\) is the number of layers. As a result, the heterogeneous representations \(\tilde{Z}\) aggregate the information of nodes from different types and thus are expected to capture the heterogeneity in the heterogeneous graph. 2.3 Connections between Homophily and Heterogeneity Given homophilous representations \( Z \) and heterogeneous representations \( \tilde{Z} \), we have the following observations: (i) they are both representations of the same node, sharing the same original node features. Therefore, they are intuitive to contain the consistent information; and (ii) the homophilous representations focus on aggregating the information from nodes of the same class, while the heterogeneous representations focus on aggregating the information from nodes of different types. As a result, homophilous and heterogeneous representations contain specific information within each of them, respectively. 
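As a concrete reference for the heterogeneous branch, below is a minimal sketch of the relation-wise aggregation in Eqs. (8)-(9), assuming PyTorch; the neighbor indexing, toy edge types, and dimensions are our own illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class HeteroEncoderLayer(nn.Module):
    """One layer of the heterogeneous encoder f_theta (Eqs. (8)-(9)): mean-aggregate
    one-hop neighbors per edge type r, apply a per-type linear map W_r and a
    nonlinearity, then average over edge types."""
    def __init__(self, dim: int, edge_types):
        super().__init__()
        self.edge_types = list(edge_types)
        self.W = nn.ModuleDict({r: nn.Linear(dim, dim, bias=False) for r in self.edge_types})

    def forward(self, Z_prev: torch.Tensor, neighbors: dict) -> torch.Tensor:
        # neighbors[r][i] lists the one-hop neighbor indices of node i under edge type r
        N, dim = Z_prev.shape
        out = torch.zeros(N, dim)
        for r in self.edge_types:
            Z_r = torch.zeros(N, dim)
            for i, nbrs in neighbors[r].items():
                if nbrs:                                 # Eq. (8): mean over N_{i,r}, then W_r and activation
                    Z_r[i] = torch.relu(self.W[r](Z_prev[list(nbrs)].mean(dim=0)))
            out = out + Z_r
        return out / len(self.edge_types)                # Eq. (9): average over edge types

# toy usage: 4 nodes, two edge types "PA" (paper-author) and "PS" (paper-subject)
Z0 = torch.randn(4, 8)
neighbors = {"PA": {0: [1], 1: [0, 2], 2: [1], 3: []},
             "PS": {0: [3], 1: [], 2: [3], 3: [0, 2]}}
layer = HeteroEncoderLayer(dim=8, edge_types=["PA", "PS"])
Z1 = layer(Z0, neighbors)
```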
To effectively utilize homophilous and heterogeneous representations (\( i.e., Z \) and \( \tilde{Z} \)), we design a consistency loss and a specificity loss to extract the consistent information between them and to maintain their individually specific information related to downstream tasks, respectively. Specifically, we first propose to learn a projection head \( p_\varphi \) to map both homophilous representations and heterogeneous representations into the same latent space, \( i.e., P = p_\varphi(Z) \) and \( \tilde{P} = p_\varphi(\tilde{Z}) \). After that, we design a consistency loss to maximize the invariance between \( P \) and \( \tilde{P} \) by: \[ L_{con} = \sum_{n=1}^{N} (p_n - \tilde{p}_n)^2 + \gamma \log \left( \sum_{i,j=1}^{d} e^{\sum_{n=1}^{N} (p_{nj} + \tilde{p}_{nj})} \right), \] where \( i \) and \( j \) indicate \( i \)-th and \( j \)-th dimensions of \( p_n \), respectively, and \( \gamma \) is a non-negative parameter. In Eq. (10), the first term encourages both \( P \) and \( \tilde{P} \) to agree with each other, thus converging to the consistency. The second term enforces different dimensions of \( P \) and \( \tilde{P} \) to uniformly distribute over the latent space, thus avoiding the issue of model collapse. As a result, Eq. (10) is available to extract the consistent information between homophilous representations and heterogeneous representations. In addition to extracting the consistent information, we aim to preserve the distinct characteristics of homophilous and heterogeneous representations as well as maintain their task-related information. However, we cannot directly add such regularization or constraints on both \( P \) and \( \tilde{P} \), as its goal differs from that of Eq. (10) and may lead to conflicts. To solve this issue, we propose to learn a projection head \( q_\gamma \) to map both homophilous representations and heterogeneous representations into another latent space, \( i.e., Q = q_\gamma(Z) \) and \( \tilde{Q} = q_\gamma(\tilde{Z}) \). After that, we employ a transformation head \( u_\phi \) on \( Q \) to obtain \( Q' = u_\phi(Q) \). We further design a specificity loss to preserve the specific information related to homophilous representations and heterogeneous representations by: \[ L_{spe} = \sum_{n=1}^{N} (q'_n - \tilde{q}_n)^2 - \eta \sum_{n=1}^{N} ((q'_n - q_n)^2 + (q'_n - p_n)^2), \] where \( \eta \) is a non-negative parameter. In Eq. (11), the first term aims to align \( \tilde{Q} \) with \( Q' \), instead of aligning \( \tilde{Q} \) with \( Q \). As a result, it avoids directly aligning the projection of homophilous representations and heterogeneous representations to preserve their distinct information. Actually, if \( u_\phi \) is an identity transformation, the first term in Eq. (11) is the same as the first term in Eq. (10). In addition, even if \( u_\phi \) is not an identity transformation, \( Q' \) may also be equal to \( P \), leading to the redundancy with the consistency loss. To avoid such scenarios, the second term in Eq. (11) enforces the transformed \( Q' \) different from the original \( Q \) and \( P \). As a result, Eq. (11) maintains the respective task-related information of homophilous representations and heterogeneous representations. We integrate the consistency loss with the specificity loss to have the final objective function as: \[ J = L_{con} + \lambda L_{spe}, \] where \( \lambda \) is a non-negative parameter. 
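A minimal sketch of the objective in Eqs. (10)-(12) is given below, assuming PyTorch. The linear modules stand in for the projection heads and the transformation head, and the second (uniformity) term of Eq. (10) is implemented under one plausible reading of the printed formula, so both it and the toy hyper-parameters should be treated as assumptions rather than the released code.

```python
import torch
import torch.nn as nn

def consistency_loss(P, P_t, gamma):
    """Eq. (10): an alignment term plus a uniformity-style regularizer over latent
    dimensions; the exact form of the regularizer is an assumed reading."""
    align = ((P - P_t) ** 2).sum()
    c = (P + P_t).sum(dim=0)                              # one statistic per latent dimension
    uniform = torch.logsumexp((c.unsqueeze(0) + c.unsqueeze(1)).reshape(-1), dim=0)
    return align + gamma * uniform

def specificity_loss(Q_transformed, Q, Q_t, P, eta):
    """Eq. (11): align Q_tilde with the transformed Q while pushing the transformed Q
    away from the original Q and from P."""
    attract = ((Q_transformed - Q_t) ** 2).sum()
    repel = ((Q_transformed - Q) ** 2).sum() + ((Q_transformed - P) ** 2).sum()
    return attract - eta * repel

# toy usage with illustrative projection / transformation heads
N, d = 16, 8
Z, Z_t = torch.randn(N, d), torch.randn(N, d)             # homophilous / heterogeneous reps
p_head, q_head, u_head = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
P, P_t = p_head(Z), p_head(Z_t)
Q, Q_t = q_head(Z), q_head(Z_t)
J = consistency_loss(P, P_t, gamma=0.1) \
    + 1.0 * specificity_loss(u_head(Q), Q, Q_t, P, eta=0.5)   # Eq. (12) with lambda = 1.0
```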
Finally, we concatenate homophilous representations \( Z \) with heterogeneous representations \( \tilde{Z} \) to obtain final representations \( \hat{Z} \) for downstream tasks. As a result, the concatenated representations \( \hat{Z} \) considering both homophily and heterogeneity in the heterogeneous graph can be theoretically proved to introduce more task-related information by Theorem 2.3, whose proof can be found in Appendix C.5. **Theorem 2.3.** For any downstream task \( T \), the representations with both homophily and heterogeneity (\( e.g., \hat{Z} \)) contain more task-related information than the representations with only homophily (\( e.g., Z \)) or with only heterogeneity (\( e.g., \tilde{Z} \)), \( i.e., \) \[ I(\hat{Z}, T) \geq \max(I(Z, T), I(\tilde{Z}, T)), \] where \( I(\cdot,\cdot) \) indicates the mutual information. Table 1: Classification performance (i.e., Macro-F1 and Micro-F1) on heterogeneous graph datasets. | Method | ACM | Yelp | DBLP | Aminer | |------------|--------------|--------------|--------------|--------------| | | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | | Deep Walk | 73.9±0.3 | 74.1±0.1 | 68.7±1.1 | 73.2±0.9 | | GCN | 86.9±0.2 | 87.0±0.3 | 85.0±0.6 | 87.4±0.8 | | GAT | 85.0±0.4 | 84.9±0.3 | 86.4±0.5 | 88.2±0.7 | | Mp2vec | 87.6±0.5 | 88.1±0.3 | 78.2±0.8 | 83.6±0.9 | | HAN | 89.4±0.2 | 89.2±0.2 | 90.5±1.2 | 90.7±1.4 | | HGT | 91.5±0.7 | 91.6±0.6 | 89.9±0.5 | 90.2±0.6 | | DMGI | 89.8±0.1 | 89.8±0.1 | 82.9±0.8 | 85.8±0.9 | | DMGIlattn | 88.7±0.3 | 88.7±0.5 | 82.8±0.7 | 85.4±0.5 | | HDMI | 90.1±0.3 | 90.1±0.3 | 80.7±0.6 | 84.0±0.9 | | HeCo | 88.3±0.3 | 88.2±0.2 | 85.3±0.7 | 87.9±0.6 | | HGCMCL | 90.6±0.7 | 90.7±0.5 | 90.7±0.8 | 91.0±0.7 | | CPIM | 91.4±0.3 | 91.3±0.2 | 90.2±0.5 | 90.3±0.4 | | HGMMAE | 90.5±0.5 | 90.6±0.7 | 90.5±0.7 | 90.7±0.5 | | DMG | 91.0±0.3 | 90.9±0.4 | 90.8±0.5 | 91.2±0.6 | | HERO | 92.2±0.5 | 92.1±0.7 | 92.4±0.7 | 92.3±0.6 | Theorem 2.3 indicates that considering both homophily and heterogeneity introduces more task-related information than considering them individually, to benefit downstream tasks (verified by the Corollary in Appendix C.6). Therefore, the proposed method is expected to perform better on different downstream tasks than previous SHGL methods that consider only the homophily in the heterogeneous graph (verified in Section 3.2). 3 EXPERIMENTS In this section, we conduct experiments on both heterogeneous and homogeneous graph datasets to evaluate the proposed HERO in terms of different downstream tasks (i.e., node classification and similarity search), compared to heterogeneous and homogeneous graph methods. Detailed settings are shown in Appendix D. Additional experimental results are shown in Appendix E. 3.1 EXPERIMENTAL SETUP 3.1.1 DATASETS The used datasets include five heterogeneous graph datasets and four homogeneous graph datasets. Heterogeneous graph datasets include three academic datasets (i.e., ACM [Wang et al., 2019], DBLP [Wang et al., 2019], and Aminer [Hu et al., 2019]), one business dataset (i.e., Yelp [Lu et al., 2019]), and one huge knowledge graph dataset (i.e., Freebase [Lv et al., 2021]). Homogeneous graph datasets include two sale datasets (i.e., Amazon-Photo and Amazon-Computers [Shchur et al., 2018]), and two co-authorship datasets (i.e., Coauthor-CS and Coauthor-Physics [Sinha et al., 2015]). 3.1.2 COMPARISON METHODS The comparison methods include eleven heterogeneous graph methods and twelve homogeneous graph methods. 
The former includes two semi-supervised methods (i.e., HAN [Wang et al., 2019] and HGT [Hu et al., 2020]), one traditional unsupervised method (i.e., Mp2vec [Dong et al., 2017]), and eight self-supervised methods (i.e., DMGI [Park et al., 2020], DMGIlattn [Park et al., 2020], HDMI [Jing et al., 2021], HeCo [Wang et al., 2021], HGCMCL [Wang et al., 2023], CPIM [Mo et al., 2023b], HGMAE [Tian et al., 2023], and DMG [Mo et al., 2023a]). The latter includes two semi-supervised methods (GCN [Kipf & Welling, 2017] and GAT [Velickovic et al., 2018]), one traditional unsupervised method (i.e., DeepWalk [Perozzi et al., 2014]), and nine self-supervised methods, (i.e., DGI [Velickovic et al., 2019], GMI [Peng et al., 2020], MVGRL [Hassani & Khasahmadi, 2020], GRACE [Zhu et al., 2020b], GCA [Zhu et al., 2021], GIC [Mavromatis & Karypis, 2021], G-BT [Bielak et al., 2022], COSTA [Zhang et al., 2022b], and DSSL [Xiao et al., 2022]). Table 2: Classification performance (i.e., Macro-F1 and Micro-F1) on homogeneous graph datasets, where OOM indicates Out-Of-Memory. | Method | Amazon-Photo | Amazon-Computers | Coauthor-CS | Coauthor-Physics | |----------|--------------|------------------|-------------|-----------------| | | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | | Deep Walk| 87.4±0.5 | 89.7±0.3 | 84.0±0.3 | 85.6±0.4 | | GCN | 90.5±0.3 | 92.5±0.2 | 84.0±0.4 | 86.4±0.3 | | GAT | 90.2±0.5 | 91.8±0.4 | 83.2±0.2 | 85.7±0.4 | | DGI | 89.3±0.2 | 91.6±0.3 | 79.3±0.3 | 83.9±0.5 | | GMI | 89.3±0.4 | 90.6±0.2 | 80.1±0.4 | 82.2±0.4 | | MVGRL | 90.1±0.3 | 91.7±0.4 | 84.6±0.6 | 86.9±0.5 | | GRACE | 90.3±0.5 | 91.9±0.3 | 84.2±0.3 | 86.8±0.5 | | GCA | 91.1±0.4 | 92.4±0.4 | 85.9±0.5 | 87.7±0.3 | | GIC | 90.0±0.3 | 91.6±0.2 | 82.6±0.4 | 84.9±0.3 | | G-BT | 91.5±0.4 | 92.6±0.6 | 86.2±0.3 | 88.1±0.5 | | COSTA | 91.3±0.4 | 92.5±0.3 | 86.4±0.3 | 88.3±0.4 | | DSSL | 90.6±0.2 | 92.1±0.3 | 85.6±0.3 | 87.3±0.4 | | HERO | 91.8±0.4 | 93.0±0.3 | 85.7±0.6 | 88.4±0.5 | For a fair comparison, we follow Dong et al. (2017), Wang et al. (2019), Lu et al. (2019), Lv et al. (2021) to select meta-path-based graphs for previous meta-path-based SHGL methods. Moreover, we follow Mo et al. (2023a) to conduct homogeneous graph methods on heterogeneous graph datasets by separately learning the representations of each meta-path-based graph and further concatenating them for downstream tasks. In addition, we replace the heterogeneous encoder $f_\theta$ with GCN to implement the proposed method on homogeneous graph datasets because there is only one node type in the homogeneous graph. The code is released at https://github.com/YujieMo/HERO ### 3.2 Results Analysis #### 3.2.1 Effectiveness on the Heterogeneous Graph We first evaluate the effectiveness of the proposed method on the heterogeneous graph datasets by reporting the results of node classification (i.e., Macro-F1 and Micro-F1) in Table 1 and Appendix E, and reporting the results of similarity search (i.e., Sim@5 and Sim@10) in Appendix E. Obviously, our method achieves superior performance on both node classification and similarity search tasks. First, for the node classification task, the proposed method always outperforms the comparison methods by large margins. For example, the proposed method on average, improves by 2.1%, compared to the best SHGL method (i.e., DMG), on four heterogeneous graph datasets. 
The reason can be attributed to the fact that the proposed method extracts both homophily and heterogeneity in the heterogeneous graph, thus introducing more task-related information to improve the effectiveness of the classification task. Second, for the similarity search task, the proposed method also obtains promising improvements. For example, the proposed method on average, improves by 1.8%, compared to the best SHGL method (i.e., DMG), on four heterogeneous graph datasets. This demonstrates the superiority of the proposed method, which captures the homophily in the heterogeneous graph, enforcing the representations to have the grouping effect and thus increasing the similarity of nodes within the same class. As a result, the effectiveness of the proposed method is verified on the heterogeneous graph datasets in terms of different downstream tasks. #### 3.2.2 Effectiveness on the Homogeneous Graph We further evaluate the effectiveness of the proposed method on the homogeneous graph datasets by reporting the results of node classification (i.e., Macro-F1 and Micro-F1) in Table 2. We can observe that the proposed method achieves competitive results on the homogeneous graph datasets. First, compared to the semi-supervised baselines (i.e., GCN and GAT), the proposed method always achieves the best results. For example, the proposed method on average, improves by 1.1%, compared to the best semi-supervised method (i.e., GCN), on four homogeneous graph datasets. Second, compared to the self-supervised methods, the proposed method also achieves superior performance. Figure 3: (a) and (b) indicate node correlation maps of ACM and Yelp datasets reordered by node labels. (c) and (d) indicate node correlation maps and corresponding visualizations (top 30% values in the correlation map are visualized as edges) of example graphs of ACM and Yelp datasets. For example, the proposed method outperforms the best self-supervised method (i.e., COSTA), on almost all homogeneous graph datasets. This indicates that the proposed method extracts the consistent information between the original graph and the graph with homophily, as well as preserves the specific information within each of them to benefit downstream tasks. As a result, the effectiveness of the proposed method is further verified on the homogeneous graph datasets. 3.2.3 Visualization and Case Study Visualization of Grouping Effect. To verify the grouping effect of homophilous representations, we visualize the node correlation maps of ACM and Yelp datasets in Figures 3(a) and 3(b), where rows and columns are reordered by node labels. In the correlation map, the darker a pixel, the higher the correlation between nodes. In Figures 3(a) and 3(b), the correlation maps exhibit a block diagonal structure where the nodes of each block belong to the same class. This indicates that if two nodes belong to the same class, then the correlation of their representations will be high, i.e., their representations are expected to be aligned. This verifies the grouping effect of the homophilous representations, which capture the homophily in the heterogeneous graph. Visualization of Example Graph. To further verify that the homophilous representations indeed capture the homophily in the heterogeneous graph, we sample example graphs from ACM and Yelp datasets and visualize the correlation maps among sampled nodes in Figures 3(c) and 3(d). Moreover, we visualize the top 30% values in the correlation maps as edges between nodes to make them more intuitive. 
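A minimal sketch of how such a correlation map and the top-30% edge set can be computed from learned representations is shown below (in PyTorch); the use of cosine similarity and the helper names are our own illustrative choices, not the released analysis script.

```python
import torch

def correlation_map(Z: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity correlation map between node representations: (N, d) -> (N, N)."""
    Zn = torch.nn.functional.normalize(Z, dim=1)
    return Zn @ Zn.t()

def top_fraction_edges(C: torch.Tensor, fraction: float = 0.3):
    """Keep the top `fraction` of off-diagonal correlation values as undirected edges."""
    C = C.clone()
    C.fill_diagonal_(float("-inf"))                 # ignore self-correlations
    vals = C[~torch.isinf(C)]
    k = max(1, int(fraction * vals.numel()))
    threshold = torch.topk(vals, k).values.min()
    idx = torch.nonzero(C >= threshold)
    return [(int(i), int(j)) for i, j in idx if i < j]

Z = torch.randn(12, 8)                              # toy homophilous representations
edges = top_fraction_edges(correlation_map(Z), fraction=0.3)
```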
In Figures 3(c) and 3(d), we have the observations as follows. First, two nodes (e.g., node 2 and node 11) within the same class do have high correlation values, while two nodes (e.g., node 2 and node 1) from different classes do have low correlation values. Second, the visualized edges in Figures 3(c) and 3(d) indeed show a high homophily rate (e.g., 73% in the example graph of the Yelp dataset). This further verifies that the homophilous representations indeed capture the homophily in the heterogeneous graph without pre-defined meta-paths. 4 Conclusion In this paper, we proposed a self-supervised heterogeneous graph learning framework to capture both homophily and heterogeneity in the heterogeneous graph without pre-defined meta-paths. To do this, we proposed to learn a self-expressive matrix adaptively and employ the heterogeneous encoder to obtain homophilous and heterogeneous representations for capturing homophily and heterogeneity in the heterogeneous graph, respectively. We further designed the consistency loss and the specificity loss to extract the consistent information between homophilous representations and heterogeneous representations and to maintain their specific information in different latent spaces, respectively. Theoretical analysis indicates that the homophilous representations capture the homophily in the heterogeneous graph. In addition, the fused representations are provable to contain more task-related information than the representations with homophily or heterogeneity only, thus benefiting downstream tasks. Extensive experimental results demonstrate the effectiveness of the proposed method on both homogeneous and heterogeneous graph datasets in terms of different downstream tasks. We discuss potential limitations and future work in Appendix F. 5 ACKNOWLEDGMENTS This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFA1004100, in part by the Medico-Engineering Cooperation Funds from University of Electronic Science and Technology of China under Grant ZYGX2022YGRH009 and Grant ZYGX2022YGRH014, in part by the National Natural Science Foundation of China under Grant 62276052. REFERENCES Piotr Bielak, Tomasz Kajdanowicz, and Nitesh V. Chawla. Graph barlow twins: A self-supervised representation learning framework for graphs. Knowledge-Based Systems, 256:109631, 2022. Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster kernels for semi-supervised learning. In NeurIPS, volume 15, 2002. Bo Chen, Jing Zhang, Xiaokang Zhang, Yuxiao Dong, Jian Song, Peng Zhang, Kaibo Xu, Evgeny Kharlamov, and Jie Tang. Gccad: Graph contrastive coding for anomaly detection. IEEE Transactions on Knowledge and Data Engineering, 2022. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, pp. 1597–1607, 2020. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, et al. Masked language modeling for proteins via linearly scalable long-context transformers. In ICLR, 2020. Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljacic. Equivariant contrastive learning. In ICLR, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pp. 4171–4186, 2019. 
Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. metapath2vec: Scalable representation learning for heterogeneous networks. In SIGKDD, pp. 135–144, 2017. Meir Feder and Neri Merhav. Relations between entropy and error probability. IEEE Transactions on Information Theory, 40(1):259–266, 1994. Andrea Galassi, Marco Lippi, and Paolo Torroni. Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(10):4291–4308, 2020. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. In NeurIPS, volume 33, pp. 21271–21284, 2020. Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In ICML, pp. 4116–4126, 2020. Xiaofei He, Shuicheng Yan, Yuxiao Hu, and Hong-Jiang Zhang. Learning a locality preserving subspace for visual recognition. In ICCV, pp. 385–392, 2003. R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, pp. 1–24, 2019. Binbin Hu, Yuan Fang, and Chuan Shi. Adversarial learning on heterogeneous information networks. In SIGKDD, pp. 120–129, 2019. Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In WWW, pp. 2704–2710, 2020.
6MRm3G4NiU
It seemed unintuitive to use the cross-product of residues and Foldseek tokens to form the vocabulary. This seems to discard information about which vocab elements share the same residue or Foldseek token. Did you try other alternatives, e.g. by concatenating embeddings for the residue and Foldseek token to form the input representations?
SaProt: Protein Language Modeling with Structure-aware Vocabulary Jin Su\textsuperscript{1,2,*}, Chenchen Han\textsuperscript{2}, Yuyang Zhou\textsuperscript{2}, Junjie Shan\textsuperscript{2}, Xibin Zhou\textsuperscript{2}, Fajie Yuan\textsuperscript{2†} Zhejiang University\textsuperscript{1}, Westlake University\textsuperscript{2} \{sujin, hanchenchen, zhouyuyang, shanjunjie, zhouxibin, yuanfajie\}@westlake.edu.cn Abstract Large-scale protein language models (PLMs), such as the ESM family, have achieved remarkable performance in various downstream tasks related to protein structure and function by undergoing unsupervised training on residue sequences. They have become essential tools for researchers and practitioners in biology. However, a limitation of vanilla PLMs is their lack of explicit consideration for protein structure information, which suggests the potential for further improvement. Motivated by this, we introduce the concept of a “structure-aware vocabulary” that integrates residue tokens with structure tokens. The structure tokens are derived by encoding the 3D structure of proteins using Foldseek. We then propose SaProt, a large-scale general-purpose PLM trained on an extensive dataset comprising approximately 40 million protein sequences and structures. Through extensive evaluation, our SaProt model surpasses well-established and renowned baselines across 10 significant downstream tasks, demonstrating its exceptional capacity and broad applicability. We have made the code\textsuperscript{†} pretrained model, and all relevant materials available at \url{https://github.com/westlake-repl/SaProt}. 1 Introduction Proteins are fundamental to biological functions, and understanding them opens promising avenues in medical, pharmaceutical, and genetic research. Protein Language Models (PLMs), drawing inspiration from NLP methodologies, have emerged as the pivotal technology for representing proteins (Rao et al., 2019). Through self-supervised training on vast amounts of protein 1D residue sequences, PLMs have proven highly proficient in capturing long-range residue correlations, i.e., co-evolution (Anishchenko et al., 2017; Rao et al., 2020). Moreover, prominent PLMs like UniRep (Alley et al., 2019), ProtTrans (Elnaggar et al., 2021), ESM (Rives et al., 2019; Meier et al., 2021; Rao et al., 2021; Lin et al., 2022), and Evoformer (Hu et al., 2022; Jumper et al., 2021) have showcased outstanding performance across a diverse array of tasks pertaining to protein structure and function. Despite the success of residue sequence-based pre-training, there’s a growing interest in utilizing protein 3D structures as training data, given their direct relevance to functions. Some work has demonstrated the potential of pre-training on experimentally determined protein structures (Yang et al., 2022; Hermosilla & Ropinski, 2022), but they are limited by the smaller number of highly accurate structures compared to residue sequences. Meanwhile, the breakthrough achieved by AlphaFold2 (AF2) (Jumper et al., 2021) in protein structure prediction has resulted in a substantial repository of structure data (Varadi et al., 2021), thus igniting interests in utilizing large-scale protein structures for training PLMs. *Work done at Westlake University. †Corresponding Author. Fajie conceived and supervised this research. Jin conducted this research. Jin, Junjie, Fajie proposed the new vocabulary together by discussing. Chenchen performed the mutational effect prediction task. 
Yuyang and Xibin collected the dataset. Jin, Fajie wrote the paper. Unlike ESM models that only offer inference code, we provide code for both training and inference. Currently, the development of structure-based PLMs based on large-scale predicted structures is still in an early stage, and existing research has certain limitations. For instance, well-known models like GearNet (Zhang et al., 2023b), still depend on a limited set of protein structures, utilizing around 800 thousand predicted structures from AF2. On the other hand, models like ESM-IF (Hsu et al., 2022) focus exclusively on specific protein tasks, such as protein inverse folding, rather than aiming for broader and more general-purpose representations. In this paper, we aim to contribute to the biological community by introducing a large and more powerful PLM trained on extensive protein sequence and structure data. To achieve this, we introduce a “structure-aware (SA) vocabulary” that encompasses both residue and structure information of proteins. Specifically, we can employ vector quantization techniques (Van Den Oord et al., 2017) to discretize protein structures into 3D tokens. These tokens, similar in format to residue tokens, capture the geometric conformation information of each residue in relation to its spatial neighbors. Here, we simply utilize Foldseek (van Kempen et al., 2023), a purpose-built tool. Then, by combining the 3D tokens with residue tokens, we devise a very intuitive yet innovative vocabulary termed the SA alphabet. This enables the conversion of the original residue sequence into an SA-token sequence, serving as the input for existing residue-based PLMs. Through unsupervised training on massive protein SA-token sequences, we obtain a Structure-aware Protein language model named SaProt. To assess its performance, we comprehensively evaluate its capabilities across 10 widely recognized protein tasks. These tasks encompass a broad range of applications, including clinical disease variant prediction (Frazer et al., 2021), fitness landscape prediction (Dallago et al., 2021; He et al., 2024), protein-protein interaction (Nooren & Thornton, 2003), as well as diverse protein function predictions (Bileschi et al., 2022; Yu et al., 2023). To summarize, our main contributions are as follows: - We introduce a structure-aware vocabulary that combines residue and 3D geometric feature for proteins. With the utilization of “SA” tokens, proteins, encompassing both primary and tertiary structures, can be effectively represented as a sequence of these novel tokens. The sequential nature, rather than the graph structure, of protein representation allows for seamless integration with advances in large-scale foundation AI models, such as BERT (Devlin et al., 2018), BART (Lewis et al., 2019), GPT (Brown et al., 2020), etc. - By utilizing the SA-token protein sequence as input, we train a structure-enhanced PLM using the ESM (Lin et al., 2022) backbone as a case study, called SaProt. To our knowledge, SaProt stands out as the PLM currently trained with the largest number of protein structures, containing 650 million parameters. Its training lasted 3 months and utilized 64 NVIDIA 80G A100 GPUs, with a computational cost similar to ESM-1b (Rives et al., 2019). - We evaluate SaProt across 10 renowned biological tasks. 
SaProt consistently exhibited improved performance compared to strong baselines, particularly models from the ESM family, including ESM-1b, ESM-1v (Meier et al., 2021), and ESM-2 (Lin et al., 2022), which are considered leading PLMs in the field. - We conduct a series of enlightening ablation studies, unveiling previously unknown findings. One such finding is the potential overfitting issues that may arise when training PLMs by integrating predicted structures with BERT-style training. This discovery highlights a crucial consideration in the design of protein structure-based PLMs. Additionally, our experimental section sheds light on several intriguing observations through dissecting SaProt. Additionally, we have made our code, model weight, and the associated datasets openly available. These materials are expected to be valuable for both the computational and biological communities. 2 RELATED WORK 2.1 RESIDUE SEQUENCE-BASED PRE-TRAINING Sequence-based pre-training methods treat protein residue sequences as natural language, enabling comprehensive representations via masked language modeling (MLM) (Devlin et al., 2018). Formally, a protein sequence is denoted as $P = (s_1, s_2, \ldots, s_n)$, where $s_i$ is a residue at the $i^{th}$ position. Prior work employs GNNs for protein structure modeling, but GNNs suffer from the over-smoothing issue (Huang et al., 2021; Chen et al., 2020), thereby hindering large and deep protein model development. and \( n \) is the sequence length. During pre-training, a set of residues are randomly masked, resulting in the modified sequence \( P_{\text{mask}} = (s_1, < \text{MASK} >, ..., s_n) \). The training objective is to predict masked residues by capturing dependencies between masked positions and surrounding context. Residue-based PLMs have shown potential in generating universal protein representations. Rives et al. (2019), Heinzinger et al. (2019) and Vig et al. (2020) substantiate the ability of PLMs to predict protein structures and functions, while Rao et al. (2021) enhances capabilities via training on Multiple Sequence Alignment (MSA) data. For mutational effect prediction, Meier et al. (2021) and He et al. (2024) adopt ESM-1v for zero-shot prediction, and Notin et al. (2022) incorporate MSA as supplementary signals. Additionally, Lin et al. (2022), Chowdhury et al. (2022) and Wu et al. (2022b) predict protein structures from single sequences by applying large PLMs. ### 2.2 Structure-based Pre-training Protein structure governs its function. The release of 200 million protein structures in AlphaFoldDB (Varadi et al., 2022) in July 2022 enables the construction of large-scale protein structure models. Protein structures are usually represented as graphs, denoted by \( G = (V, E) \), with \( V \) representing the set of \( N \) residues and \( E \) representing the set of edges connecting the residues. These edges are typically based on the \( C_\alpha \) distances between the residues. GNNs utilize \( G \) for diverse pre-training strategies like contrastive learning (Hermosilla & Ropinski, 2022; Zhang et al., 2023b,a), self-prediction (Yang et al., 2022; Chen et al., 2023) and denoising score matching (Guo et al., 2022; Wu et al., 2022a). Another way inspired by AF2 involves incorporating structure features as contact biases into the attention maps within the self-attention module, e.g., Uni-Mol (Zhou et al., 2023). 
However, the above structure-based models rely on either real structures from the Protein Data Bank (PDB) or a limited number of predicted AF2 structures. To the best of our knowledge, there are currently no “general-purpose” PLMs based on a large-scale set of predicted structures. ### 2.3 Foldseek The initial goal of Foldseek (van Kempen et al., 2022) is to facilitate fast and accurate protein structure searches. To achieve this, Foldseek employs a VQ-VAE model (Van Den Oord et al., 2017) for encoding protein structures into informative tokens. These tokens, derived from 20 distinct 3Di states, are represented as \( P = (f_1, f_2, ..., f_n) \), where \( f_i \) represents the structure token at the \( i \)-th position and \( n \) is the sequence length. Foldseek achieves this encoding by identifying nearest neighbors and extracting features for individual residues. A preprint by Heinzinger et al. (2023) introduces ProstT5, which enables bidirectional conversion between residue and Foldseek token sequences. ProstT5 excels at tasks like remote homology detection and protein design. However, it is not considered a general-purpose PLM (see Appendix A). Figure 2: Loss trends for three protein structure models. The training set is AF2 structures while in the validation set, one is AF2 structures and the other comprises real structures from PDB. 3 Idea of New Vocabulary 3.1 Preliminary Analysis The goal of this paper is to develop a general-purpose PLM by leveraging predicted protein structures to serve multiple protein prediction tasks. Contrastive learning (CL) and BERT-style MLM training are currently two most prevalent pre-training approaches. However, CL primarily emphasizes on protein-level representation learning and performs poorly at the residue-level task. For instance, GearNet (Zhang et al., 2023b) and 3D-PLM (Hermosilla & Ropinski, 2022) trained by CL are not directly useful for predicting effects of amino acid mutations (Frazer et al., 2021). We initially explored two intuitive approaches for protein structure modeling. The first approach involves treating the predicted structures from AF2 as a graph and employing GNNs for modeling, following Yang et al. (2022). The second approach is to extract the distance and angle information between pairwise residues from structures, incorporating it as a structure bias in a Transformer (Vaswani et al., 2017) attention map. This approach was applied by Uni-Mol, ESMFold (Lin et al., 2022) and Evoformer. We evaluate the two models using the MLM objective as it can support both protein-level and residue-level tasks. It should be noted that the structure model in Uni-Mol, ESMFold, and Evoformer were initially designed for specific tasks with different loss functions, rather than being intended as general-purpose PLM. Therefore, it remains uncertain whether these neural networks would be effective when trained with predicted structures using the MLM objective. Through two exploratory experiments, we noted that training directly using predicted structures yielded poor performance on the validation set containing real PDB structures (Figure 2). The decrease in loss on predicted structures did not correspond to a decrease in loss on real structures. This mismatch may be due to the fact that PLM has detected traces of AF2 predictions. Furthermore, inferior results were reported in downstream tasks (Table 3). Despite a substantial loss decrease on training data, these models failed to learn meaningful representations for downstream protein tasks. 
3.2 Structure-aware Vocabulary Inspired by the above discoveries, we aim to incorporate protein structures from a novel perspective. Our key idea revolves around creating a structure-aware (SA) vocabulary, where each SA token encompasses both residue and structure information, as illustrated in Figure 1. --- 3 Note that MIF by Yang et al. (2022) utilized only real structures for pre-training, so it is unclear whether the massive predicted structures from AF2 would be beneficial or not. 4 As a basic analysis, we utilized the 35M version of ESM-2 (see Appendix E.1.1) and SaProt. The MIF is consistent with the one described in the original paper, with a size of 3.4M. Given a protein $P$, its primary sequence can be denoted as $(s_1, s_2, ..., s_n)$, where $s_i \in V$ represents the residue at the $i$th site, and $V$ represents residue alphabet. Building upon the concept of Foldseek, we can introduce an alternative approach for representing protein tertiary structures by using a vector quantized variational autoencoder (Van Den Oord et al., 2017). This approach enables us to develop a structure alphabet $F$, wherein $P$ can be represented as the $(f_1, f_2, ..., f_n)$ sequence, with $f_j \in F$ denoting the structure token for the $j$th residue site. To maintain simplicity, we directly adopt the default setup of Foldseek, which defines the size $m$ of $F$ as 20. Now, we can combine the residue and structure tokens per residue site, generating a new structure-aware sequence $P = (s_1 f_1, s_2 f_2, ..., s_n f_n)$, where $s_i f_i \in V \times F$ is the token fusing both residue and geometric conformation information. The structure-aware sequence can then be fed into a standard Transformer encoder as basic input. It’s important to note that we also introduce a mask signal “#” to both residue and structure alphabet, which results in “$s_i#$” and “#$f_i$” that indicate only residue or structure information is available. The size of the SA vocabulary is $21 \times 21 = 441$ (see Figure 1). The design of this new vocabulary is simple yet innovative and fundamental, enabling the representation of any residue sequence using this “SA” sequence. As a result, protein models that utilize residue sequences as input can effortlessly integrate the new vocabulary sequence as a substitute. ### 3.3 SaProt #### 3.3.1 Model Architecture SaProt employs the same network architecture and parameter size as the 650M version of ESM-2. The main distinction lies in the expanded embedding layer, which encompasses 441 SA tokens instead of the original 20 residue tokens. This nearly identical architecture enables straightforward comparisons with the ESM model. Moreover, the model size strikes a balance between performance and feasibility for downstream task training, avoiding excessive memory or computation cost. #### 3.3.2 Objective Function We train SaProt using the BERT-style MLM objective, similar to ESM-1b and ESM-2, enabling the support for both protein-level and residue-level tasks. Formally, For a protein sequence $P = (s_1 f_1, s_2 f_2, ..., s_n f_n)$, the input and output can be represented as: $$\text{input} : (s_1 f_1, ..., #f_i, ..., s_n f_n) \rightarrow \text{output} : s_i f_i$$ (see Figure 1). $f_i$ in #$f_i$ is made visible during training to reduce the model’s emphasis on predicting it. This is different from the straightforward masking strategy, i.e. randomly masking SA token $s_i f_i$ by “# #”, and then predicting both residue and structure token directly from the SA vocabulary (see Appendix Figure 7). 
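To make the vocabulary and the two masking schemes concrete, below is a minimal sketch in plain Python; the token strings, the lower-case 3Di letters, and the helper names are our own illustration rather than SaProt's released tokenizer. The final function sketches the straightforward alternative that masks both the residue and the structure token.

```python
# 20 residues + "#" mask, 20 Foldseek 3Di states + "#" mask -> 21 * 21 = 441 SA tokens
RESIDUES = list("ACDEFGHIKLMNPQRSTVWY") + ["#"]
FOLDSEEK = list("acdefghiklmnpqrstvwy") + ["#"]     # lower case to tell the two alphabets apart
SA_VOCAB = {res + f3di: idx for idx, (res, f3di) in
            enumerate((r, f) for r in RESIDUES for f in FOLDSEEK)}
assert len(SA_VOCAB) == 441

def to_sa_sequence(residues: str, foldseek: str) -> list:
    """Fuse a residue sequence and its Foldseek 3Di sequence into SA tokens s_i f_i."""
    return [r + f for r, f in zip(residues, foldseek)]

def mask_adopted(sa_seq: list, i: int) -> list:
    """SaProt's objective: mask only the residue and keep the structure token visible,
    i.e. s_i f_i -> '#' f_i; the model then predicts s_i f_i at position i."""
    out = list(sa_seq)
    out[i] = "#" + sa_seq[i][1]
    return out

sa = to_sa_sequence("MKV", "dfa")     # toy 3-residue protein with toy 3Di states
masked = mask_adopted(sa, i=1)        # ['Md', '#f', 'Va']

def mask_both(sa_seq: list, i: int) -> list:
    """The straightforward alternative: replace s_i f_i by '##' and predict both the
    residue and the structure token directly from the SA vocabulary."""
    out = list(sa_seq)
    out[i] = "##"
    return out
```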
We do not adopt this strategy because the SA tokens may be not accurate enough. Predicting the exact SA tokens may lead the model in the wrong optimization direction. With the proposed masking objective, although there are still inaccuracies in certain Foldseek tokens, the global structure information should remain effective, which provides valuable context for the prediction. From this perspective, it is more reasonable to predict the residue tokens rather than the Foldseek structural tokens or both of them. We perform the empirical study on the two masking strategies during pre-training in Appendix F. To ensure a fair comparison, SaProt was pre-trained using identical training strategies with ESM-2 (refer to Appendix C). We build the pre-training dataset, which consists of approximately 40 million AF2 structures. Details are included in Appendix B, including how to proceed with the lower pLDDT region. ### 4 Experiments We evaluate SaProt across 10 diverse downstream tasks, encompassing residue-level and protein-level tasks. Given that many proteins in the original datasets lack experimentally determined structures, we conduct all evaluations using predicted structures obtained from AlphaFoldDB without special mention. Furthermore, proteins without structures in AlphaFoldDB will not be utilized in all our experiments. --- 5The accuracy of SA tokens depends on the accuracy of both AF2 and Foldseek. 4.1 Zero-shot Mutational Effect Prediction 4.1.1 Datasets We adopt the ProteinGym (Notin et al., 2022) benchmark and ClinVar (Landrum et al., 2018) dataset used in Frazer et al. (2021) to evaluate the performance of SaProt on the zero-shot mutational effect prediction tasks (Meier et al., 2021). For dataset details, we refer readers to Appendix D.2.1. | Dataset | ESM-2 | ESM-1v | ESM-1b | Tranception L | ESM-IF | MIF-ST | EVE | MSA Transformer | SaProt | |--------------------------|-------|--------|--------|---------------|--------|--------|-----|----------------|--------| | ClinVar | 0.862 | 0.891 | 0.900 | 0.845 | 0.748 | 0.891 | 0.878 | 0.854 | **0.909** | | ProteinGym (w/o MSA retrieval) | 0.475 | 0.448 | 0.440 | 0.413 | 0.409 | 0.474 | - | - | **0.478** | | ProteinGym (w/ MSA retrieval) | 0.479 | 0.472 | 0.472 | 0.465 | 0.425 | 0.480 | 0.477 | 0.464 | **0.489** | Table 1: Zero-shot mutational effect prediction. ClinVar uses AUC (area under the ROC curve) and ProteinGym uses Spearman’s \( \rho \) as evaluation metric. They are two distinct biological tasks. 4.1.2 Baselines & Evaluation We compare SaProt with two types of baselines: sequence-based models and structure-based models. For sequence-based models, we include ESM-1b (Rives et al., 2019), ESM-1v (Meier et al., 2021) (the results of 5 ESM models are averaged), ESM-2 650M (Lin et al., 2022)\(^6\) and Tranception L (Notin et al., 2022). For structure-based models, we consider the MIF-ST (Yang et al., 2022) and ESM-IF (Hsu et al., 2022). Additionally, we present the performance of EVE (Frazer et al., 2021), a renowned model that leverages MSA information for predicting disease variant effects, and MSA Transformer (Rao et al., 2021), a protein language model pre-trained on large scale of MSA data (we sample 384 homologous proteins for inference following Notin et al. (2022)). Here, we did not include comparisons with contrastive learning models like GearNet and 3D-PLM, as they are not directly applicable to residue-level zero-shot prediction tasks. 
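For reference, residue-level zero-shot scoring with masked-language-model PLMs is typically computed via masked marginals (Meier et al., 2021): the mutated site is masked and the log-probabilities of the mutant and wild-type residues are compared (for SaProt, the structure token at that site would stay visible). Below is a minimal sketch that assumes the per-position log-probabilities have already been produced by the model; the single-letter vocabulary and variant format are illustrative assumptions.

```python
import torch

def masked_marginal_score(log_probs: torch.Tensor, pos: int,
                          wt_id: int, mt_id: int) -> float:
    """Masked-marginal mutational effect score: with position `pos` masked,
    score = log p(mutant) - log p(wild type). `log_probs` is an (L, V) matrix of
    per-position log-probabilities from a masked language model."""
    return (log_probs[pos, mt_id] - log_probs[pos, wt_id]).item()

def score_variant(log_probs: torch.Tensor, variant: str, vocab: dict) -> float:
    """Score a single substitution written like 'A42G' (wild type, 1-based position, mutant)."""
    wt, pos, mt = variant[0], int(variant[1:-1]) - 1, variant[-1]
    return masked_marginal_score(log_probs, pos, vocab[wt], vocab[mt])

# toy usage: 5 positions, 20-letter vocabulary, random log-probabilities
vocab = {aa: i for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}
log_probs = torch.log_softmax(torch.randn(5, 20), dim=-1)
print(score_variant(log_probs, "A3G", vocab))
```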
Also note that with the exception of EVE on ProteinGym, all baseline models and their weights used in this study were obtained from the official paper. We solely employed them for prediction without any training. We trained EVE on ProteinGym ourselves using the official code as it necessitates training on each MSA. We strictly follow the evaluation used in EVE (Frazer et al., 2021) for assessing the model’s performance on the ClinVar dataset. For the ProteinGym dataset, we employ the evaluation measures described in (Notin et al., 2022; Meier et al., 2021). Details are provided in Appendix D.2.2. 4.1.3 Results Table 1 shows the zero-shot results on ProteinGym & ClinVar, resulting in the below conclusions: • SaProt outperforms all residue sequence-based and structure-based models on both tasks. As mentioned earlier, SaProt shares an identical network architecture, model size, and training examples with ESM-2, with the key difference lying in its structure-aware vocabulary. By comparing SaProt with ESM-2, it clear that SaProt yields consistent improvement for predicting mutational effects. Then, SaProt shows higher accuracy compared to MIF-ST, even though the latter model was trained using experimentally determined highly accurate structures.\(^7\) The benefit could be attributed to the large-scale structures when training SaProt. ESM-IF exhibits the poorest performance in both tasks, primarily because it was originally designed for the inverse folding task. In addition, ESM-IF model size and training data are nearly 5 times smaller than SaProt. • MSA information enhances models’ zero-shot ability. Notin et al. (2022) introduces a technique to enhance autoregressive inference by leveraging MSA information, leading to a consistent improvement. Following it, we extend the technique to SaProt and all baselines. The results show that the integration of MSA information greatly enhances the zero-shot prediction ability of various PLMs, with SaProt still achieving the highest accuracy among them. The results also suggest that the improvement techniques used for residue sequence-based models are likely to be useful to SaProt as well. \(^6\)The results for 15B ESM-2 are reported in the Appendix D.2.3 which shows worse results. \(^7\)MIF-ST exhibits poor accuracy when trained with AF2 structures, as shown in Figure 2 & Appendix E.1.2 4.2 Supervised Fine-tuning Tasks | Model | Thermostability | HumanPPI | Metal Ion Binding | EC | GO | DeepLoc | |----------------|-----------------|----------|-------------------|----|----|---------| | | Spearman’s ρ | Acc% | Acc% | Fmax | Fmax | Fmax | Acc% | Acc% | | ESM-2 | 0.680 | 76.67 | 71.56 | 0.868 | 0.670 | 0.473 | 0.470 | 82.09 | 91.96 | | ESM-1b | 0.708 | 82.22 | 73.57 | 0.864 | 0.656 | 0.451 | 0.466 | 80.33 | 92.83 | | MIF-ST | 0.694 | 75.54 | 75.08 | 0.807 | 0.633 | 0.375 | 0.322 | 78.96 | 91.76 | | GearNet | 0.571 | 73.86 | 71.26 | 0.874 | 0.644 | 0.481 | 0.476 | 69.45 | 89.18 | | SaProt | **0.724** | **86.41**| **75.75** | **0.882** | **0.682** | **0.486** | **0.479** | **85.57** | **93.55** | | ESM-GearNet | 0.651 | 84.09 | 74.11 | 0.887 | 0.676 | 0.516 | 0.507 | 82.30 | 92.94 | | SaProt-GearNet | 0.660 | 85.80 | 74.44 | 0.889 | 0.678 | 0.522 | 0.508 | 84.16 | 93.63 | Table 2: Experimental results on 8 downstream tasks. 
4.2.1 Datasets For protein-level tasks, we evaluate SaProt on a diverse set of datasets from several benchmarks (Dallago et al., 2021; Xu et al., 2022; Rao et al., 2019), including predicting Thermostability, Metal Ion Binding, protein localization (DeepLoc), protein annotations (EC and GO) and protein-protein interaction (HumanPPI). Dataset description and splits are listed in Appendix D.3. 4.2.2 Baselines In addition to the above baselines, we compared SaProt to GearNet (Zhang et al., 2023b). Inspired by ESM-GearNet (Zhang et al., 2023a), we replaced the ESM module in ESM-GearNet with SaProt, resulting in an ensemble model called SaProt-GearNet. Training details are in Appendix D.3.3. 4.2.3 Results Experimental results are illustrated in Table 2, shedding light on the following insights: • SaProt outperforms ESM-2 in all protein-level tasks. Specifically, SaProt shows remarkable enhancements over ESM-2 in the Thermostability, HumanPPI, Metal Ion Binding, and DeepLoc tasks. This outcome once again demonstrates that integrating structure information into PLMs leads to superior protein representation. • SaProt outperforms the two structure models, GearNet & MIF-ST, by a substantial margin. This notable performance difference highlights the efficacy of structure modeling in SaProt. • While SaProt outperforms the ESM models, SaProt-GearNet also outperforms ESM-GearNet, which highlights the orthogonality of SaProt with more advanced improvement techniques. However, it is interesting to note that combining two models does not always result in higher performance. For example, SaProt-GearNet and ESM-GearNet do not necessarily surpass their respective single models SaProt and ESM. SaProt exhibits superior performance across all tasks, when also considering the results in Section 4.1.3. Its impressive performance positions it as a compelling alternative to the ESM family. 5 Analysis We conduct insightful ablation studies by dissecting SaProt. One can find more analysis in Appendix E, including comparisons of masking strategies, masking rates on structure tokens, etc. 5.1 Awareness Of Protein Structure SaProt incorporates protein structure information by using structure-aware tokens rather than using explicit 3D coordinates. However, this approach relies on the accuracy of the Foldseek encoding. | Model | Short Range | Medium Range | Long Range | |---------------|-------------|--------------|------------| | | P@L | P@L/2 | P@L/5 | P@L | P@L/2 | P@L/5 | P@L | P@L/2 | P@L/5 | | ESM-2 | 44.87 | 44.90 | 50.18 | 45.21 | 45.80 | 53.90 | 35.33 | 41.96 | 52.11 | | SaProt (Residue-only) | 40.29 | 40.22 | 44.10 | 36.26 | 36.69 | 42.47 | 22.15 | 27.63 | 36.68 | | SaProt | 57.11 | 57.20 | 63.71 | 53.43 | 55.05 | 66.45 | 48.14 | 59.75 | 74.32 | Table 3: Results on contact prediction. Short range, medium range and long range contacts are contacts between positions that are separated by 6 to 11, 12 to 23 and 24 or more positions, respectively. Naturally, a question arises: does SaProt truly possess stronger structure information compared to ESM-2, given that residue-based PLMs also implicitly contain structure information (Rao et al., 2020)? To answer it, we conduct an additional structure prediction task, namely contact map prediction on the TAPE benchmark (Rao et al., 2019). For both SaProt & ESM-2, we freeze the backbone and solely fine-tune the contact head. The evaluation of contact map is conducted using PDB data. 
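For clarity, the P@L/k metrics in Table 3 take the top L/k highest-scoring residue pairs within each sequence-separation band and report the fraction that are true contacts. A minimal sketch is given below (in PyTorch); the 8 Å contact threshold implied by `true_contacts` is the common convention and an assumption here, as are the toy inputs.

```python
import torch

def precision_at_k(pred: torch.Tensor, true_contacts: torch.Tensor,
                   min_sep: int, max_sep: int, k: int) -> float:
    """Precision of the top-k predicted contacts whose sequence separation j - i lies in
    [min_sep, max_sep] (e.g. 6-11 short, 12-23 medium, >=24 long range).
    `pred` is an (L, L) score matrix, `true_contacts` an (L, L) 0/1 matrix."""
    L = pred.shape[0]
    i, j = torch.meshgrid(torch.arange(L), torch.arange(L), indexing="ij")
    band = (j - i >= min_sep) & (j - i <= max_sep)      # upper triangle within the band
    scores, labels = pred[band], true_contacts[band]
    top = torch.topk(scores, min(k, scores.numel())).indices
    return labels[top].float().mean().item()

# toy usage: P@L, P@L/2, P@L/5 in the long-range band (separation >= 24)
L = 64
pred = torch.rand(L, L)
true_contacts = (torch.rand(L, L) < 0.05).float()
for div in (1, 2, 5):
    print(precision_at_k(pred, true_contacts, min_sep=24, max_sep=L, k=L // div))
```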
As shown in Table 3, SaProt exhibits remarkable superiority over ESM-2 in the contact map prediction task, evidencing that SaProt contains more accurate structural information. From this perspective, a PLM with enhanced structure features is expected to exhibit improved accuracy on protein function prediction tasks. We additionally evaluate SaProt's performance with all structure tokens masked as “#”, named “SaProt (Residue-only)”. SaProt (Residue-only) performs worse than ESM-2 but still exhibits a certain degree of structure prediction ability. This result demonstrates that SaProt is capable of capturing structural information even when the structure tokens are not given.

To further study the impact of structure and residue tokens on SaProt's performance, we conduct an additional zero-shot prediction experiment. We randomly replace a percentage of structure tokens with random (Foldseek) tokens while keeping the residues unchanged, and then we do the opposite for residue tokens. SaProt's performance is evaluated under this setting. As shown in Figure 3, the accuracy of SaProt decreases when either residue tokens or structure tokens are randomly substituted, which clearly emphasizes the importance of both residue and structure tokens.

5.2 PDB versus AlphaFoldDB
For proteins with experimentally determined structures, it is essential to investigate how SaProt performs. To do this, we continue pre-training SaProt on 60,000 PDB structures, resulting in a variant called SaProt-PDB. We conduct evaluations by assessing SaProt and SaProt-PDB on both AF2 structures and real PDB structures. We did not evaluate all tasks due to the lack of PDB structures for some tasks.

Table 4 shows that, when trained solely on AF2 structures, the overall accuracy of SaProt is not largely affected by the choice between AF2 structures and PDB structures. However, for SaProt-PDB, it is advisable to use PDB structures directly when they are available for downstream tasks. This may not have a substantial impact on supervised tasks such as EC and GO, as the model will be retrained on the downstream structures. However, it can have a key impact on the zero-shot task, as indicated by the comparison of 0.454 vs. 0.423 when the training/testing data is highly inconsistent for SaProt-PDB.

In general, SaProt exhibits slightly better performance on AF2 structures, while SaProt-PDB achieves better accuracy on real structures. This outcome is expected, as training and testing are consistent in the optimal performance setting. Note that some protein structures in PDB are not stored in AlphaFoldDB, so the column-level (AlphaFoldDB vs. PDB) comparison in Table 4 is not meaningful. We have released the SaProt-PDB weights for utilizing PDB structures.

| Model | AF2: ProteinGym | AF2: EC | AF2: GO-MF | AF2: GO-BP | AF2: GO-CC | PDB: ProteinGym | PDB: EC | PDB: GO-MF | PDB: GO-BP | PDB: GO-CC |
|---|---|---|---|---|---|---|---|---|---|---|
| SaProt | **0.450** | 0.882 | 0.682 | 0.486 | 0.479 | 0.423 | 0.885 | 0.665 | 0.460 | 0.410 |
| SaProt-PDB | 0.448 | 0.880 | 0.679 | 0.482 | 0.472 | **0.454** | 0.888 | **0.669** | **0.465** | **0.415** |

Table 4: Results of SaProt and SaProt-PDB on AlphaFoldDB and PDB structures. Proteins without PDB structures in the ProteinGym dataset were removed during evaluation.

5.3 Visualization
For a more intuitive comparison, we employ t-SNE to visualize the protein representations generated by the last layer of SaProt and ESM-2. Figure 4 shows the visualization results using the non-redundant version ($PIDE < 40\%$) of the SCOPe (Chandonia et al., 2018) database.
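A minimal sketch of this visualization step follows, assuming per-protein embeddings (e.g., mean-pooled last-layer representations) and SCOPe class labels have already been extracted into arrays; the scikit-learn/matplotlib calls are illustrative rather than the exact pipeline used for Figure 4.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(embeddings, labels, title):
    """embeddings: (N, d) per-protein vectors; labels: (N,) SCOPe class ids."""
    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
    for c in np.unique(labels):
        idx = labels == c
        plt.scatter(coords[idx, 0], coords[idx, 1], s=4, label=str(c))
    plt.legend(markerscale=3)
    plt.title(title)
    plt.show()

# e.g. plot_tsne(saprot_embeds, scope_classes, "SaProt")
#      plot_tsne(esm2_embeds,   scope_classes, "ESM-2")
```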
For the alpha and beta proteins, the representations generated by ESM-2 are intertwined, whereas those generated by SaProt are separated by structure type. This observation again underscores SaProt's capability in discerning structure changes. Furthermore, we visualize the embeddings of all 400 structure-aware tokens (tokens that encompass “#” are ignored). As depicted in Figure 8(c), we observe a certain degree of clustering. In the semantic space, SA tokens that are in close proximity to each other often correspond to similar types of residues or Foldseek tokens.

Figure 4: Embedding visualizations of ESM-2 and SaProt on the SCOPe database.

6 CONCLUSION
In this study, we introduce a novel structure-aware (SA) vocabulary that integrates primary and tertiary protein structure information into the SA token. The resulting SA-token-based sequence has the potential to serve as a novel protein representation. Building upon this, we train a general-purpose PLM called SaProt, which achieves state-of-the-art performance on 10 protein function prediction tasks. Like the ESM models, SaProt aims to contribute to the advancement of the biological community.

This study has several limitations: (1) The performance of the proposed SA vocabulary heavily depends on Foldseek, which aims to balance search efficiency and encoding accuracy. Therefore, there is still room for improving the representation capability of SaProt. (2) Due to computational constraints, the model size of SaProt may not have reached its maximum capacity. (3) In addition to the mentioned tasks, there are other applications that could be explored using the SA vocabulary. For instance, predicting protein complex structures by replacing two protein sequences with SA-token-based sequences could automatically incorporate single-chain structure information. In protein generation tasks, generating SA-token sequences could potentially provide stronger structure constraints during the generation process. These avenues remain open for future research.

ACKNOWLEDGMENTS
We thank Sergey Ovchinnikov for many insightful suggestions for this research. Sergey advised evaluating the performance of our model and ESM-2 on the contact map prediction task to evidence our model's effectiveness in capturing more accurate structure information. Sergey also gave many suggestions on paper writing, figure presentation, and research focus. His contributions are significant and he should be listed as a co-author; due to the abstract submission deadline, we could not add his name as an author. This work is supported by the National Key Research and Development Program of China (No. 2022ZD0115100), the National Natural Science Foundation of China (No. U21A20427), the Westlake Center of Synthetic Biology and Integrated Bioengineering (WE-SynBio), and the Research Center for Industries of the Future (No. WU2022C030).

REFERENCES
Ethan C Alley, Grigory Khimulya, Surojit Biswas, Mohammed AlQuraishi, and George M Church. Unified rational protein engineering with sequence-based deep representation learning. Nature Methods, 16(12):1315–1322, 2019.

José Juan Almagro Armenteros, Casper Kaae Sønderby, Søren Kaae Sønderby, Henrik Nielsen, and Ole Winther. DeepLoc: prediction of protein subcellular localization using deep learning. Bioinformatics, 33(21):3387–3395, 2017. doi: 10.1093/bioinformatics/btx431. URL https://doi.org/10.1093/bioinformatics/btx431.

Ivan Anishchenko, Sergey Ovchinnikov, Hetunandan Kamisetty, and David Baker.
Origins of co-evolution between residues distant in protein 3d structures. Proceedings of the National Academy of Sciences, 114(34):9122–9127, 2017. Maxwell L Bileschi, David Belanger, Drew H Bryant, Theo Sanderson, Brandon Carter, D Sculley, Alex Bateman, Mark A DePristo, and Lucy J Colwell. Using deep learning to annotate the protein universe. Nature Biotechnology, 40(6):932–937, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. John-Marc Chandonia, Naomi K Fox, and Steven E Brenner. SCOPe: classification of large macromolecular structures in the structural classification of proteins—extended database. Nucleic Acids Research, 47(D1):D475–D481, 11 2018. ISSN 0305-1048. doi: 10.1093/nar/gky1134. URL https://doi.org/10.1093/nar/gky1134 Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, and Dejing Dou. Structure-aware protein self-supervised learning, 2023. Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 3438–3445, 2020. Ratul Chowdhury, Nazim Bouatta, Surojit Biswas, Christina Floristean, Anant Kharkar, Koushik Roy, Charlotte Rochereau, Gustaf Ahdritz, Joanna Zhang, George M Church, et al. Single-sequence protein structure prediction using a language model and deep learning. Nature Biotechnology, 40(11):1617–1623, 2022. Christian Dallago, Jody Mou, Kadina E. Johnston, Bruce J. Wittmann, Nicholas Bhattacharya, Samuel Goldman, Ali Madani, and Kevin K. Yang. Flip: Benchmark tasks in fitness landscape inference for proteins. bioRxiv, 2021. doi: 10.1101/2021.11.09.467890. URL https://www.biorxiv.org/content/early/2021/11/11/2021.11.09.467890 Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning–based protein sequence design using proteinmpnn. Science, 378(6615):49–56, 2022.
ijK5hyxs0n
There are also a variety of non-parametric operations in feed-forward neural networks that are not addressed by the paper, e.g., pooling layers (especially max pooling), atrous convolutions, padding, and so on. I expect these operations could be encoded as additional indicator features, but I wonder whether there are better and less artificial solutions.
Graph Metanetworks for Processing Diverse Neural Architectures Derek Lim * MIT CSAIL dereklim@mit.edu Haggai Maron Technion / NVIDIA hmaron@nvidia.com Marc T. Law NVIDIA marcl@nvidia.com Jonathan Lorraine NVIDIA jlorraine@nvidia.com James Lucas NVIDIA jaluca@nvidia.com Abstract Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for the symmetries and geometry of parameter spaces. However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging. In this work, we overcome these challenges by building new metanetworks — neural networks that take weights from other neural networks as input. Put simply, we carefully build graphs representing the input neural networks and process the graphs using graph neural networks. Our approach, Graph Metanetworks (GMNs), generalizes to neural architectures where competing methods struggle, such as multi-head attention layers, normalization layers, convolutional layers, ResNet blocks, and group-equivariant linear layers. We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged. We validate the effectiveness of our method on several metanetwork tasks over diverse neural network architectures. 1 Introduction Neural networks are well-established for predicting, generating, and transforming data. A newer paradigm is to treat the parameters of neural networks themselves as data. This insight inspired researchers to suggest neural architectures that can predict properties of trained neural networks (Eilertsen et al., 2020), generate new networks (Erkoç et al., 2023), optimize networks (Metz et al., 2022), or otherwise transform them (Navon et al., 2023; Zhou et al., 2023a). We refer to these neural networks that process other neural networks as metanetworks, or metanets for short. Metanets enable new applications, but designing them is nontrivial. A common approach is to flatten the network parameters into a vector representation, neglecting the input network structure. More generally, a prominent challenge in metanet design is that the space of neural network parameters exhibits symmetries. For example, permuting the neurons in the hidden layers of a Multilayer Perceptron (MLP) leaves the network output unchanged (Hecht-Nielsen, 1990). Ignoring these symmetries greatly degrades the metanet performance (Peebles et al., 2022; Navon et al., 2023). Instead, equivariant metanets respect these symmetries, so that if the input network is permuted then the metanet output is permuted in the same way. Recently, several works have proposed equivariant metanets that have shown significantly improved performance (Navon et al., 2023; Zhou et al., 2023a,b). However, these networks typically require highly specialized, hand-designed layers that can be difficult to devise. A careful analysis of its symmetries is necessary for any input architecture, followed by the design of corresponding equivariant metanet layers. Generalizing this procedure to more complicated network architectures is... 
time-consuming and nontrivial, so existing methods can only process simple input networks with linear and convolutional layers, and cannot process standard modules such as normalization layers or residual connections — let alone more complicated modules such as attention blocks. Moreover, these architectures cannot directly process input neural networks with varying architectures, such as those with different numbers of layers or hidden units. This work offers a simple and elegant solution to metanet design that respects neural network parameter symmetries. As in the concurrent work of Zhang et al. (2023), our technique’s crux is representing an input neural network as a graph (see Figure 1). We show how to efficiently transform a neural network into a graph such that standard techniques for learning on graphs – e.g., Message Passing Neural Networks (Gilmer et al., 2017; Battaglia et al., 2018) or Graph Transformers (Rampášek et al., 2022) – will be equivariant to the parameter symmetries. One of our key contributions is in developing a compact parameter graph representation, which in contrast to established computation graphs allows us to handle parameter-sharing layers like convolutions and attention layers without scaling with the activation count. While past work is typically restricted to processing MLPs and simple Convolutional Neural Networks (CNNs) (LeCun et al., 1989), we validate experimentally that our graph metanets (GMNs) generalize to more complicated networks such as Transformers (Vaswani et al., 2017), residual networks (He et al., 2016), normalization layers (Ioffe & Szegedy, 2015; Ba et al., 2016; Wu & He, 2018), general group-equivariant architectures (Ravanbakhsh et al., 2017) like Deep Sets (Zaheer et al., 2017), and more. We prove theoretically that our metanets are equivariant to permutation symmetries in the input network, which we formulate via neural graph automorphisms (Section 2.2). This generalizes the hidden neuron permutations in MLPs and channel permutations of CNNs covered in prior work to arbitrary feedforward neural architectures. We further prove that our metanets operating on computation graphs are at least as expressive as prior methods — meaning they can approximate them to arbitrary accuracy — and can express the forward pass of any input feedforward neural network, generalizing a result of Navon et al. (2023) (Section 3). Empirical evaluations show that our approach solves a variety of metanet tasks with diverse neural architectures. As part of this effort, we trained new datasets of diverse image classifiers, including 2D CNNs, 1D CNNs, DeepSets, ResNets, and Vision Transformers. Our method is easier to implement than past equivariant metanets while being at least as expressive, and it is applicable to more general input architectures. Crucially, our GMNs achieve strong quantitative performance across all tasks we explored. 2 Graph Automorphism-Based Metanets We first explain how neural networks can be encoded as Directed Acyclic Graphs (DAGs). There are many choices in representing neural networks as DAGs, perhaps the most common being a computation graph (see Appendix C). This work introduces a more compact representation, referred to as parameter graphs. We then introduce one of the paper’s main concepts — Neural DAG Automorphisms. This concept generalizes previously studied symmetry groups for MLPs and CNNs to arbitrary feedforward architectures represented as DAGs. 
To conclude this section, we describe our GNN-based metanet that operates over these graphs and is equivariant to Neural DAG Automorphisms. A glossary of our notation is provided in Appendix Table 4.

Motivation. Certain permutations of parameters in neural networks do not change the function they parameterize. For example, consider a simple MLP defined such that \( f_\theta(x) := W_2 \sigma(W_1 x) \) with one hidden layer, where \( \theta := (W_2, W_1) \) are the parameters of the network, and \( \sigma \) is a nonlinear element-wise activation function. For any permutation matrix \( P \), if we define \( \tilde{\theta} := (W_2 P^\top, P W_1) \), then for all inputs \( x \), we have \( f_\theta(x) = f_{\tilde{\theta}}(x) \). This \( P \) corresponds to a permutation of the order of the hidden neurons, which is well-known not to affect the network function (Hecht-Nielsen, 1990). Likewise, permuting the hidden channels of a CNN does not affect the network function (Navon et al., 2023; Entezari et al., 2022; Ainsworth et al., 2023). While these permutation symmetries for MLPs and simple CNNs are easy to determine by manual inspection, it is more difficult to determine the symmetries of general architectures. For example, simple residual connections introduce additional neuron dependencies across layers. Instead of manual inspection, we show that graph automorphisms (i.e., graph isomorphisms from a graph to itself) on DAGs representing feedforward networks correspond to permutation parameter symmetries. From this observation, it can be shown that GNNs acting on these DAGs are equivariant to their permutation symmetries.

Overview of our approach. See Figure 1 for an illustration. Given a general input feedforward neural network, we first encode it as a graph in which each parameter is associated with an edge, and then process this graph with a GNN. The GNN outputs a single fixed-length vector or predictions for each node or edge, depending on the learning task. For instance, one graph-level task is to predict the scalar accuracy of an input neural network on some task. An edge-level task is to predict new weights for an input neural network to change its functionality in some way. We now discuss the graph construction, the symmetries of these graphs, and the GNN we use.

2.1 Graph construction for general feedforward architectures

2.1.1 Computation graphs
Every feedforward neural network defines a computation graph as a DAG (Zhang et al., 2023), where nodes are neurons and edges hold neural network parameter weight values (see Fig. 1 and Fig. 2). Thus, this gives a method to construct a weighted graph. However, the computation graph approach has some downsides. For one, it may be expensive due to weight-sharing: for instance, a 2D convolution layer with one input channel, one output channel, and a kernel size of 2 has 4 parameters, but these 4 parameters are used in the computation of many activations (e.g., 1024 activations for a 32 × 32 input grid). Further, we may want to add input node and edge features – such as layer number – to help performance and expressive power.[1] Figure 2 illustrates an example of a (small) computation graph for convolutions (for visual clarity, we exclude bias terms). More details, including the exact formal correspondence between feedforward neural network functions and computation graphs, are given in Appendix C.1.

Figure 2: An example computation graph for a network with a single convolutional layer. The layer has a 2 × 2 filter kernel, a single input and output channel, and applies the filter with a stride of 2. Even in this small case of a 4 × 4 input image, the graph has 16 edges for only 4 parameters.

[1] In Section 3, our proofs rely on these features to show graph metanets can express existing metanets.
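Before moving to parameter graphs, the motivating claim above — that permuting hidden neurons leaves an MLP's function unchanged — can be checked numerically. The following is an illustrative sketch (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 3, 5, 2
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))

def mlp(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)   # f_theta(x) = W2 relu(W1 x)

P = np.eye(d_hidden)[rng.permutation(d_hidden)]   # random permutation matrix
x = rng.normal(size=d_in)

# Permuted parameters: theta_tilde = (W2 P^T, P W1)
out_original = mlp(x, W1, W2)
out_permuted = mlp(x, P @ W1, W2 @ P.T)
assert np.allclose(out_original, out_permuted)    # the network function is unchanged
```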
2.1.2 Parameter graphs
To deal with the challenges of computation graphs, we propose alternate neural network graph constructions — some examples of which are shown in Figure 3 — that are (a) efficient and (b) allow expressive metanets. We call these graphs parameter graphs because we design the graphs so that each parameter is associated with a single edge of the graph (whereas a parameter may be associated with many edges in a computation graph). We design modular subgraphs that can be created for each layer and then stitched together. Our goal is to design parameter graphs with at most one edge for each parameter in the neural network. Additionally, they should capture the correct set of parameter permutation symmetries. Full details are in Appendix B, but we discuss the major points behind the design of a selection of parameter graphs here.

Linear layers. Figure 1 depicts three linear layers in sequence. Each linear layer's parameter subgraph coincides with its computation graph, but even so, there are important design choices to be made. Bias parameters could be encoded as node features, as done by Zhang et al. (2023), or as self-loop edges on each neuron. Instead, we include a bias node for each layer and encode the bias parameters as edges from that node to the corresponding neurons in the layer. The bias node approach is preferable because the self-loop or node-feature approaches to encoding biases can hinder the expressive power of the metanet. In particular, the results of Navon et al. (2023) and Zhou et al. (2023a) show that the permutation equivariant linear metanet maps for MLP inputs include message-passing-like operations where the representation of a bias parameter is updated via the representations of other bias parameters in the same layer. Using a message passing GNN on a graph with bias nodes allows us to naturally express these operations in a single graph metanet layer, as explained in Appendix D.1.3.

Convolution layers. Convolutions and other group-equivariant linear layers leverage parameter sharing, where the same parameter is used to compute many activations (Ravanbakhsh et al., 2017). Therefore, representing convolutions as a computation graph introduces scaling issues and binds the network graph to a choice of input size. To avoid this, we develop a succinct and expressive parameter graph representation of convolutions. This is depicted on the left of Figure 3 for a convolution layer with a $2 \times 2$ filter, one input channel, and two output channels. Note that we have two output channels here, unlike the computation graph in Figure 2 where there is only one. Our subgraph construction allocates a node for each input and output channel. We then have parallel edges between each input and output node for each spatial location in the filter kernel — making this a multigraph. Bias nodes are added as in the linear layers. This subgraph contains exactly one edge for each parameter in the convolution layer while capturing the parameter permutation symmetries as graph automorphisms. The spatial position of each weight within the kernel is included via a positional encoding in the corresponding edge feature.
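To make the parameter-graph idea concrete, here is a minimal sketch that builds the edge list for an MLP: one node per neuron, one bias node per layer, and exactly one edge per weight or bias, with the parameter value and layer index stored as edge features. It assumes the MLP is given as lists of weight matrices and bias vectors; the bookkeeping choices are illustrative rather than the paper's exact data structures.

```python
import numpy as np

def mlp_parameter_graph(weights, biases):
    """weights[l]: (d_out, d_in) matrix; biases[l]: (d_out,) vector.

    Returns (num_nodes, edges), where each edge is
    (src_node, dst_node, parameter_value, layer_index).
    """
    edges = []
    layer_nodes = [list(range(weights[0].shape[1]))]   # input-neuron nodes
    next_id = weights[0].shape[1]
    for l, (W, b) in enumerate(zip(weights, biases)):
        out_nodes = list(range(next_id, next_id + W.shape[0]))
        next_id += W.shape[0]
        bias_node = next_id                             # one bias node per layer
        next_id += 1
        for i, dst in enumerate(out_nodes):
            for j, src in enumerate(layer_nodes[-1]):
                edges.append((src, dst, float(W[i, j]), l))        # one edge per weight
            edges.append((bias_node, dst, float(b[i]), l))         # one edge per bias
        layer_nodes.append(out_nodes)
    return next_id, edges

# Example: a 3-5-2 MLP has 3+5+2 neuron nodes, 2 bias nodes, and (15+5)+(10+2) = 32 edges.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(5, 3)), rng.normal(size=(2, 5))]
bs = [rng.normal(size=5), rng.normal(size=2)]
n_nodes, edge_list = mlp_parameter_graph(Ws, bs)
assert n_nodes == 12 and len(edge_list) == 32
```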
In Section 4.1, we use our graph metanets to process 1D and 2D convolutional networks, as well as DeepSets networks that consist of permutation equivariant linear layers (Zaheer et al., 2017).

Multi-head attention layers. Attention layers (Vaswani et al., 2017) also exhibit parameter sharing across the sequence length. As with convolutions, we design an efficient subgraph representation where each parameter appears as a single edge. There is one node for each feature dimension of the input, of the vectors used in the attention computation, and of the output. There are two sets of edges: one corresponding to the query, key, and value maps, and a second corresponding to the final output mapping. In the middle of Figure 3 we show a single-head self-attention layer, with bias nodes excluded for visual clarity. Generalizing this to multi-head attention simply requires adding additional node features to the middle layer that indicate which head each node belongs to.

Residual layers. A residual connection does not introduce any additional parameters, but it does affect the permutation symmetries in the parameter space of the network. Therefore, it is crucial to represent residual connections as additional parameter-free edges within the parameter graph. The top-right of Figure 3 shows a residual connection bypassing a linear layer. The edges are drawn as dashed lines to emphasize that there are no associated parameters. As is natural in the computation graph, we fix the weight of the residual edge to be 1.

2.2 Neural DAG Automorphisms

The prior section describes how to represent (feedforward) neural networks as DAGs. A natural question from an equivariant learning perspective is: what are the symmetries of this DAG representation? Specifically, we consider graph automorphisms, which are structure-preserving transformations of a graph onto itself. A neural DAG automorphism of a DAG \((V, E)\) associated with a neural network is a permutation of nodes \(\phi : V \rightarrow V\) that preserves adjacency, preserves types of nodes (e.g., \(\phi\) cannot map hidden neurons to input neurons), and preserves weight-sharing constraints (i.e., tied weights must still be tied after permuting the endpoints with the automorphism); see Appendix C.2 for more details. Every automorphism \(\phi\) also induces a permutation of edges \(\phi^e : E \rightarrow E\), where edge \((i, j)\) is mapped to \(\phi^e((i, j)) = (\phi(i), \phi(j))\). Intuitively, a neural DAG automorphism represents a permutation of the neural network parameters via the induced edge permutation \(\phi^e\). We write \(\Phi(\theta)\) for this permutation applied to the parameters themselves, meaning \(\Phi(\theta)_{(\phi(i), \phi(j))} = \theta_{(i,j)}\). Hidden node permutations in MLPs and hidden channel permutations in CNNs are special cases of neural DAG automorphisms, which we explain in Appendix C.2.

To formalize these notions, we show that every neural DAG automorphism \(\phi\) of a computation graph is a permutation parameter symmetry, in the sense that the induced parameter permutation \(\Phi\) does not change the neural network function. Figure 4 illustrates several neural DAG automorphisms.

**Proposition 1.** For any neural DAG automorphism \(\phi\) of a computation graph, the neural network function is left unchanged: \(\forall x \in X, f_\theta(x) = f_{\Phi(\theta)}(x)\).

A proof is given in Appendix C.4. Recall that our goal is to design metanets equivariant to parameter permutation symmetries.
Proposition 1 shows that, to achieve this, it is necessary to design metanets that are equivariant to neural DAG automorphisms. Graph metanets achieve this exactly, since GNNs are equivariant to permutation symmetries of graphs (Maron et al., 2019).

**Proposition 2.** Graph metanets are equivariant to parameter permutations induced by neural DAG automorphisms.

These results formally justify using graph metanets on computation graphs for equivariance to parameter permutation symmetries. Noting that the parameters are stored as edge features in our computation and parameter graphs, we now design graph neural networks that operate on these DAGs.

2.3 Formulating Metanets as GNNs

After constructing the input graphs, we use a GNN as our metanet to learn representations and perform downstream tasks. We desire GNNs that learn edge representations, since the input neural network parameters are placed on the edges of the constructed graph. While countless GNN variants have been developed in the last several years (Hamilton, 2020), most learn node representations. For simplicity, we loosely follow the general framework of Battaglia et al. (2018), which defines general message-passing GNNs that update node, edge, and global features.

For a graph, let \( v_i \in \mathbb{R}^{d_v} \) be the feature of node \( i \), \( e_{(i,j)} \in \mathbb{R}^{d_e} \) a feature of the directed edge \( (i,j) \), \( u \in \mathbb{R}^{d_u} \) a global feature associated with the entire graph, and let \( E \) be the set of edges in the graph. The directed edge \( (i,j) \) starts at \( j \) and ends at \( i \). We allow multigraphs, where there can be several edges (and hence several edge features) between a pair of nodes \( (i,j) \); thus, we let \( E_{(i,j)} \) denote the set of edge features associated with \( (i,j) \). A general GNN layer updating these features can then be written as:

\[ v_i \leftarrow \mathrm{MLP}^v_2\left( v_i,\ \sum_{j}\ \sum_{e_{(i,j)} \in E_{(i,j)}} \mathrm{MLP}^v_1\left(v_i, v_j, e_{(i,j)}, u\right),\ u \right) \tag{1} \]

\[ e_{(i,j)} \leftarrow \mathrm{MLP}^e\left(v_i, v_j, e_{(i,j)}, u\right) \tag{2} \]

\[ u \leftarrow \mathrm{MLP}^u\left( \sum_i v_i,\ \sum_{e \in E} e,\ u \right) \tag{3} \]

Intuitively, node features are updated by message passing along neighbors (Equation 1), the features of adjacent nodes update the features of the edges connecting them (Equation 2), and the global feature \( u \) is updated with aggregations of all features (Equation 3). While our graphs are DAGs, we are free to use undirected edges by ensuring that \( (i,j) \in E \) implies \( (j,i) \in E \) with \( e_{(i,j)} = e_{(j,i)} \). We often choose to allow message passing between layers in both directions.

For parameter-level metanet tasks with per-parameter predictions, we let the prediction be the final feature \( e_{(i,j)} \) of the parameter's corresponding edge. For network-level metanet tasks, where the final prediction is a fixed-length vector, we pool edge features. We could also pool node features, but found pooling edge features sufficient. We do not use global features in our empirical results, but we find them crucial for the expressive power results of Proposition 3.
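A simplified PyTorch sketch of the layer in Equations 1–3 is given below. It is an illustration of the update structure only — layer widths, aggregation details, and the handling of parallel (multigraph) edges are simplified relative to the actual experiments.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, d_hidden=64):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_out))

class GraphMetanetLayer(nn.Module):
    """One layer of Equations 1-3: updates node (v), edge (e), and global (u) features."""
    def __init__(self, dv, de, du):
        super().__init__()
        self.msg = mlp(2 * dv + de + du, dv)    # MLP^v_1
        self.node = mlp(2 * dv + du, dv)        # MLP^v_2
        self.edge = mlp(2 * dv + de + du, de)   # MLP^e
        self.glob = mlp(dv + de + du, du)       # MLP^u

    def forward(self, v, e, u, src, dst):
        # v: (N, dv); e: (M, de); u: (du,); src/dst: (M,) edge endpoints (edge goes dst <- src)
        u_e = u.expand(e.shape[0], -1)
        m = self.msg(torch.cat([v[dst], v[src], e, u_e], dim=-1))   # per-edge messages
        agg = torch.zeros_like(v).index_add_(0, dst, m)             # sum over incoming edges
        v = self.node(torch.cat([v, agg, u.expand(v.shape[0], -1)], dim=-1))
        e = self.edge(torch.cat([v[dst], v[src], e, u_e], dim=-1))
        u = self.glob(torch.cat([v.sum(0), e.sum(0), u], dim=-1))
        return v, e, u
```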
3 Expressive Power of Graph Metanets (GMNs)

Ideally, one does not sacrifice expressive power when restricting a neural architecture to satisfy group equivariance constraints. We want our metanet architecture to be powerful enough to learn useful functions of network parameters. To this end, we first show that GMNs can express two existing equivariant metanets on MLP inputs. Consequently, GMNs are at least as expressive as these approaches.

**Proposition 3.** On MLP inputs (where parameter graphs and computation graphs coincide), graph metanets can express StatNN (Unterthiner et al., 2020) and NP-NFN (Zhou et al., 2023a).

The proof is given in Appendix D.1. StatNN is based on permutation-invariant statistics of each weight or bias, which graph metanetworks can easily compute. The linear layers of NP-NFN consist of local message-passing-like operations and global operations that the global feature can capture.

Second, we show that GMNs can simulate the forward pass of an input neural network represented by the DAG of its computation graph (Appendix C.1). This substantially generalizes a result of Navon et al. (2023), who show that their DWSNets can simulate the forward pass of MLPs.

**Proposition 4.** On computation graph inputs, graph metanets can express the forward pass of any input feedforward neural network as defined in Section C.1.

The above result applies only to GMNs operating on computation graphs. This makes it directly applicable to the parameter graphs for MLPs, as they coincide with the computation graph. However, we expect that a similar result is possible for more general parameter graphs. We leave a formal proof of this to future work.

Table 1: Results for predicting the test accuracy of input neural networks trained on CIFAR-10. The 50% rows use a training set of 15,000 uniformly selected input networks, the 5% rows use 10% of this training set, and the OOD rows only train on input networks of low hidden dimension (while testing on networks with strictly higher hidden dimension). Our method performs best in all setups, with increasing benefits in the low-data and OOD regimes.

| Training setting | Metanet | Varying CNNs $R^2$ | Varying CNNs $\tau$ | Diverse Architectures $R^2$ | Diverse Architectures $\tau$ |
|---|---|---|---|---|---|
| 50% | DeepSets (Zaheer et al., 2017) | .778±.002 | .697±.002 | .562±.020 | .559±.011 |
| 50% | DMC (Eilertsen et al., 2020) | .948±.009 | .876±.003 | .957±.009 | .883±.007 |
| 50% | GMN (Ours) | .978±.002 | .915±.006 | .975±.002 | .908±.004 |
| 5% | DeepSets (Zaheer et al., 2017) | .692±.006 | .648±.002 | .126±.015 | .290±.010 |
| 5% | DMC (Eilertsen et al., 2020) | .816±.038 | .762±.014 | .810±.046 | .758±.013 |
| 5% | GMN (Ours) | .876±.010 | .797±.005 | .918±.002 | .828±.005 |
| OOD | DeepSets (Zaheer et al., 2017) | .741±.015 | .683±.005 | .128±.071 | .380±.014 |
| OOD | DMC (Eilertsen et al., 2020) | .387±.229 | .760±.024 | −.134±.147 | .566±.055 |
| OOD | GMN (Ours) | .891±.037 | .870±.010 | .768±.063 | .780±.030 |

Figure 5: Histograms of CIFAR-10 accuracies for our Varying CNNs and Diverse Architectures datasets. Left and middle show train and test accuracy for the two datasets. Right shows test accuracy of Diverse Architectures split by model type.

4 EXPERIMENTS

4.1 PREDICTING ACCURACY FOR VARYING ARCHITECTURES

Task. As in prior works (Unterthiner et al., 2020; Zhou et al., 2023a), we train metanets to predict the test accuracy of input neural networks. We consider image classification neural networks trained on the CIFAR-10 dataset (Krizhevsky, 2009), and the metanet task is to take the parameters of an input network and predict the network's image classification accuracy on the CIFAR-10 test set.

Datasets. To demonstrate the flexibility of our graph metanets, we train image classifiers that vary significantly in size and architecture.
For our “Varying CNNs” dataset, we train about 30,000 basic 2D CNNs varying in common architectural design choices like hidden dimension, number of convolution layers, number of fully connected layers, and type of normalization layer (BatchNorm (Ioffe & Szegedy, 2015) or GroupNorm (Wu & He, 2018)). For our “Diverse Architectures” dataset, we also train four other diverse image classifiers to test our model's generalization across multiple architectures: (i) basic 1D CNNs treating images as raster-ordered sequences of pixels, (ii) DeepSets treating images as pixel sets, which maintain pixel position information with positional encodings (Zaheer et al., 2017), (iii) ResNets (He et al., 2016), and (iv) Vision Transformers (Dosovitskiy et al., 2021). This dataset also totals about 30,000 networks. The Vision Transformers' patch embedding module can be viewed as a convolution, which is how we encode this module as a graph (the Transformer layer graph encodings are described in Figure 7).

Figure 5 shows that these trained networks span a wide range of accuracies, between 10% and 77.5% for Varying CNNs and between 7.5% and 87.8% for Diverse Architectures. Also, the number of parameters in these networks ranges from 970 to 21,165 in Varying CNNs and from 970 to 87,658 in Diverse Architectures. These networks are significantly more diverse and achieve higher accuracy than the dataset of Unterthiner et al. (2020), who train small CNNs of a fixed architecture that obtain at most 56% test accuracy on CIFAR-10. Our Diverse Architectures dataset also contains many more network types and modules than the datasets of Eilertsen et al. (2020) and Schürholt et al. (2022), which are limited to CNNs.

**Metanetworks.** For our graph metanet, we use a simple message passing GNN as in Section 2.3 that does not use a global graph feature. To obtain an invariant prediction, we mean-pool over edge representations. We cannot apply competing permutation equivariant methods like DWSNet (Navon et al., 2023) or NFN (Zhou et al., 2023a), because they cannot process normalization layers, input neural networks of different sizes, or modules like self-attention. Instead, as a baseline, we consider the Deep Meta Classifier (DMC) from Eilertsen et al. (2020), which vectorizes an input network's parameters and processes the vector with a 1D CNN, allowing the use of differently sized networks. We also consider a baseline that treats the parameters as a set and applies a DeepSets network (Zaheer et al., 2017) to output a scalar prediction. Note that the DeepSets baseline is invariant to permutation parameter symmetries, but it is also invariant to permutations that do not correspond to parameter symmetries (which significantly outnumber permutation parameter symmetries), so it has low expressive power.

We evaluate our method and the two baselines across six data settings, using both the Varying CNNs dataset and the Diverse Architectures dataset: we explore training on about half of the input networks, training on only 10% of this previous split, and an out-of-distribution (OOD) generalization setting in which we train on a reduced set of architectures that have smaller hidden dimension than the held-out architectures.

**Results.** See Table 1 for quantitative results and Figure 8 in the Appendix for scatterplots in the OOD setting on Diverse Architectures. We report the R-squared value ($R^2$) and the Kendall $\tau$ coefficient of the predicted generalization against the true generalization.
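For reference, a minimal sketch of how these two metrics can be computed, assuming arrays of predicted and true test accuracies (scipy and scikit-learn provide standard implementations):

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import r2_score

true_acc = np.array([0.42, 0.55, 0.61, 0.73, 0.35])   # ground-truth test accuracies
pred_acc = np.array([0.40, 0.57, 0.60, 0.70, 0.38])   # metanet predictions

r2 = r2_score(true_acc, pred_acc)          # R^2: fraction of variance explained
tau, _ = kendalltau(pred_acc, true_acc)    # Kendall tau: agreement in ranking
print(f"R^2 = {r2:.3f}, Kendall tau = {tau:.3f}")
```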
Our GMNs outperform both baselines in predicting accuracy for input networks across all six data settings. When we restrict training to 10% of the full training set size, we see that GMNs generalize substantially better than the baselines. This performance gap is maintained in the more challenging OOD generalization setting, where the non-equivariant DMC performs very poorly in $R^2$. The improved GMN performance could stem from high expressive power (which the DeepSets baseline lacks) combined with better generalization due to equivariance to parameter permutation symmetries (which DMC lacks).

4.2 Editing 2D INRs

Table 2: Test MSE (lower is better) for editing 2D INRs, following the methodology of Zhou et al. (2023a). Baseline results are from Zhou et al. (2023a;b).

| Metanetwork | Contrast | Dilate |
|-------------|----------|--------|
| MLP | .031 | .306 |
| MLP-Aug | .029 | .307 |
| NFN-PT | .029 | .197 |
| NFN-HNP | .0204 ±.0000 | .0706 ±.0005 |
| NFN-NP | .0203 ±.0000 | .0693 ±.0009 |
| NFT | .0200 ±.0002 | .0510 ±.0004 |
| GMN (ours) | **.0197 ±.0000** | .0603 ±.0010 |

Table 3: Results for self-supervised learning of neural network representations, reported as the test MSE of a linear regressor on the learned representations. Numbers other than GMN are from Navon et al. (2023).

| Metanetwork | Test MSE |
|----------------------|----------|
| MLP | 7.39 ±.19 |
| MLP + Perm. aug | 5.65 ±.01 |
| MLP + Alignment | 4.47 ±.15 |
| INR2Vec (Arch.) | 3.86 ±.32 |
| Transformer | 5.11 ±.12 |
| DWSNets | 1.39 ±.06 |
| GMN (ours) | **1.13 ±.08** |

Next, we empirically test the ability of GMNs to process simple MLP inputs, to compare against less flexible permutation equivariant metanets such as NFN (Zhou et al., 2023a) and NFT (Zhou et al., 2023b). For this, we train metanetworks on the 2D INR editing tasks of Zhou et al. (2023a), where the inputs are weights of an INR representing an image, and the outputs are weights of an INR representing the image with some transformation applied to it. Table 2 shows our results. Our GMNs outperform most metanetworks on these simple input networks. In particular, GMN outperforms all methods on the Contrast task, and is only outperformed by NFT on the Dilate task.

4.3 Self-Supervised Learning with INRs

We also compare GMNs against another less flexible permutation equivariant metanet, DWSNets (Navon et al., 2023), in the self-supervised learning experiments of Navon et al. (2023). Here, the input data are MLPs fit to sinusoidal functions of the form \( x \mapsto a \sin(bx) \), where \( a, b \in \mathbb{R} \) are varying parameters. The goal is to learn a metanet encoder that gives strong representations of input networks, using a contrastive-learning framework similar to SimCLR (Chen et al., 2020). Our GMNs learn edge representations, which are then mean-pooled to get a vector representation of each input network. The downstream task for evaluating these representations is fitting a linear model on the metanet representations to predict the coefficients \( a \) and \( b \) of the sinusoid that the input MLP represents. Table 3 shows the results. GMNs outperform all baselines on this task. Thus, even in settings most favorable to competing methods — the only settings those methods can handle — GMNs are empirically successful at metanet tasks.

5 Conclusion

In this work, we proposed Graph Metanetworks, an approach to processing neural networks with theoretical and empirical benefits. Theoretically, our approach satisfies permutation parameter symmetry equivariance while having provably high expressive power.
Empirically, we can process diverse neural architectures, including layers that appear in state-of-the-art models, and we outperform existing metanetwork baselines across all tasks that we evaluated.

Limitations We make substantial progress towards improving the scalability of GMNs by introducing parameter graphs. However, large neural networks can have billions of parameters, and processing them may be more difficult. We believe that standard scalable GNN methods can be used to scale to the billion-parameter regime (as existing GNNs are capable of processing graphs with billions of edges (Hu et al., 2021) using modest computing resources), but we have not yet tried to use Graph Metanets at such a scale. Additionally, we argue that parameter graphs are easier to design than the specialized architectures of prior work, but we do not give formal constraints on their design. Further work investigating parameter graphs is promising; for instance, our theory of neural DAG automorphisms depends on the more expensive computation graphs, but it could possibly be extended to parameter graphs. Moreover, our approach only accounts for permutation-based parameter symmetries and does not account for, e.g., symmetries induced by scaling weights in ReLU networks (Dinh et al., 2017; Godfrey et al., 2022).

Future work Our graph-based approach to metanets is promising for future development. This paper mostly uses a basic message-passing GNN architecture, which can be further improved using the many GNN improvements in the literature. Furthermore, our theoretical developments largely apply to computation graphs, and we expect future work can extend these results to the more practical parameter graphs. Since graph metanets can process modern neural network layers like self-attention and spatial INR feature grids, they can be used to process and analyze state-of-the-art neural networks. Further, future work could try applying graph metanets to difficult yet impactful metanet tasks such as pruning, learned optimization, and finetuning pretrained models.

Acknowledgments We thank Matan Atzmon, Jiahui Huang, Karsten Kreis, Francis Williams, and Xiaohui Zeng for helpful comments. We would also like to thank Jun Gao, Or Perel, and Frank Shen for helpful input on some of our early INR experiments. DL is funded by an NSF Graduate Fellowship. HM is the Robert J. Shillman Fellow, and is supported by the Israel Science Foundation through a personal grant (ISF 264/23) and an equipment grant (ISF 532/23).

REFERENCES Samuel Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=CQsmMYmlP5T. Bruno Andreis, Soro Bedionita, and Sung Ju Hwang. Set-based neural network encoding. arXiv preprint arXiv:2305.16625, 2023. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Schwarz, and Hyunjik Kim. Spatial functa: Scaling functa to imagenet classification and generation. arXiv preprint arXiv:2302.03130, 2023. Léon Bottou and Patrick Gallinari.
A framework for the cooperation of learning algorithms. Advances in neural information processing systems, 3, 1990. Adriano Cardace, Pierluigi Zama Ramirez, Francesco Ballerini, Allan Zhou, Samuele Salti, and Luigi Di Stefano. Neural processing of tri-plane hybrid neural fields. arXiv preprint arXiv:2310.01140, 2023. Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16123–16133, 2022. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597–1607. PMLR, 2020. Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. Cameron Diao and Ricky Loynd. Relational attention: Generalizing transformers for graph-structured tasks. In The Eleventh International Conference on Learning Representations, 2023. Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In International Conference on Machine Learning, pp. 1019–1028. PMLR, 2017. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Emilien Dupont, Hyunjik Kim, SM Eslami, Danilo Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you can treat it like one. International Conference on Machine Learning (ICML), 2022. Gabriel Eilertsen, Daniel Jönsson, Timo Ropinski, Jonas Unger, and Anders Ynnerman. Classifying the classifier: dissecting the weight space of neural networks. Proceedings of the European Conference on Artificial Intelligence, 2020. Rahim Entezari, Hanie Sedghi, Olga Saukh, and Behnam Neyshabur. The role of permutation invariance in linear mode connectivity of neural networks. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=dNigytemkL.
fDaLmkdSKU
Also, take the extreme case of a single arbitrarily chosen Gaussian kernel. Is there not some problem with fragility/generalizability? Performance on untrained data? How does this approach account for the robustness of the parameterization? Maybe this relates to the question of which topology is being used to characterize Lipschitz continuity.
NEAR-OPTIMAL SOLUTIONS OF CONSTRAINED LEARNING PROBLEMS Juan Elenter * University of Pennsylvania Luiz F. O. Chamon University of Stuttgart Alejandro Ribeiro University of Pennsylvania ABSTRACT With the widespread adoption of machine learning systems, the need to curtail their behavior has become increasingly apparent. This is evidenced by recent advancements towards developing models that satisfy robustness, safety, and fairness requirements. These requirements can be imposed (with generalization guarantees) by formulating constrained learning problems that can then be tackled by dual ascent algorithms. Yet, though these algorithms converge in objective value, even in non-convex settings, they cannot guarantee that their outcome is feasible. Doing so requires randomizing over all iterates, which is impractical in virtually any modern applications. Still, final iterates have been observed to perform well in practice. In this work, we address this gap between theory and practice by characterizing the constraint violation of Lagrangian minimizers associated with optimal dual variables, despite lack of convexity. To do this, we leverage the fact that non-convex, finite-dimensional constrained learning problems can be seen as parametrizations of convex, functional problems. Our results show that rich parametrizations effectively mitigate the issue of feasibility in dual methods, shedding light on prior empirical successes of dual learning. We illustrate our findings in fair learning tasks. 1 INTRODUCTION Machine learning (ML) has become a core technology of information systems, reaching critical applications from medical diagnostics (Engelhard et al., 2023) to autonomous driving (Kiran et al., 2021). Consequently, it has become paramount to develop ML systems that not only excel at their main task, but also adhere to requirements such as fairness and robustness. Since virtually all ML models are trained using empirical risk minimization (ERM) (Vapnik, 1999), a natural way to impose requirements is to explicitly add constraints to these optimization problems (Cotter et al., 2018; Chamon & Ribeiro, 2020; Velloso & Van Hentenryck, 2020; Fioretto et al., 2021; Chamon et al., 2023). Recent works (Chamon & Ribeiro, 2020; Chamon et al., 2023) have shown that from a probably approximately correct (PAC) perspective, constrained learning is essentially as hard as classical learning and that, despite non-convexity, it can be tackled using dual algorithms that only involve a sequence of regularized, unconstrained ERM problems. This approach has been used in several domains, such as federated learning (Shen et al., 2022), fairness (Cotter et al., 2019; Tran et al., 2021), active learning (Elenter et al., 2022), adversarial robustness (Robey et al., 2021), and data augmentation (Hounie et al., 2022). These theoretical works, however, only address (i) the estimation error, arising from the empirical approximation of expectations in ERM and (ii) the approximation error, arising from using finite-dimensional models with limited functional representation capability. These are the leading challenges in unconstrained learning since the convergence properties of unconstrained optimization algorithms are well-understood in convex (e.g., (Bertsekas, 1997; Boyd & Vandenberghe, 2004)) as well as many non-convex settings (e.g., for overparametrized models (Soltanolkotabi et al., 2018; Brutzkus & Globerson, 2017; Ge et al., 2017)). 
This is not the case in constrained learning, where (iii) the optimization error can play a crucial role.

(*Corresponding author: elenter@seas.upenn.edu)

Indeed, dual methods are severely limited when it comes to recovering feasible solutions for constrained problems. In fact, not only might their primal iterates not converge to a feasible point [e.g., Fig. 1 or (Cotter et al., 2019, Section 6.3.1)], but they might not converge at all, displaying a cyclostationary behavior instead. This problem is hard even from an algorithmic complexity point of view (Daskalakis et al., 2021). For convex problems, this issue can be overcome by simply averaging the iterates (Nedić & Ozdaglar, 2009). Non-convex problems, however, require randomization (Kearns et al., 2018; Agarwal et al., 2018; Goh et al., 2016; Chamon et al., 2023). This approach is not only impractical, given the need to store a growing sequence of primal iterates, but also raises ethical considerations, since randomization further hinders explainability. Yet, it has been observed that for typical modern ML tasks, taking the last or best iterate can perform well in practice (Cotter et al., 2018; Chamon & Ribeiro, 2020; Chamon et al., 2023; Robey et al., 2021; Elenter et al., 2022; Hounie et al., 2022; Shen et al., 2022; Gallego-Posada et al., 2022).

This work addresses this gap between theory and practice by characterizing the sub-optimality and infeasibility of primal solutions associated with optimal dual variables. To do so, we observe that, though non-convex, constrained learning problems are generally parametrized versions of benign functional optimization problems. We then show that for sufficiently rich parametrizations, solutions obtained by dual algorithms closely approximate these functional solutions, not only in terms of optimal value as per (Cotter et al., 2019; Chamon & Ribeiro, 2020; Chamon et al., 2023), but also in terms of constraint satisfaction. This implies that dual ascent methods yield near-optimal and near-feasible solutions without randomization, despite non-convexity.

## 2 Constrained Learning

### 2.1 Statistical Constrained Risk Minimization

As in classical learning, constrained learning tasks can be formulated as a statistical optimization problem, namely,

$$P_p^* = \min_{\theta \in \Theta} \ell_0(f_\theta) := \mathbb{E}_{(x,y)}[\tilde{\ell}_0(f_\theta(x), y)]$$

subject to

$$\ell_i(f_\theta) := \mathbb{E}_{(x,y)}[\tilde{\ell}_i(f_\theta(x), y)] \leq 0, \quad i = 1, \ldots, m \qquad (P_p)$$

where $f_\theta : X \to Y$ is a function associated with the parameter vector $\theta \in \Theta \subseteq \mathbb{R}^p$ and the hypothesis class $\mathcal{F}_\theta = \{f_\theta : \theta \in \Theta\}$ induced by this family of functions is assumed to be a subset of some compact functional space $\mathcal{F} \subset L^2$. Throughout the paper, we use the subscript $p$ (parametrized) to refer to quantities related to $(P_p)$. The functionals $\ell_i : \mathcal{F} \to \mathbb{R}, i = 0, \ldots, m$, denote expected risks for loss functions $\tilde{\ell}_i$. In this setting, $\ell_0$ can be interpreted as a top-line metric (e.g., accuracy), while the functionals $\ell_1, \ldots, \ell_m$ encode statistical requirements that the solution must satisfy (see example below).

**Example 2.1:** Learning under counterfactual fairness constraints. Consider the problem of learning an accurate classifier that is insensitive to changes in a set of protected attributes.
Due to the correlation between these attributes and other features, simply hiding them from the model is not enough to guarantee this insensitivity. To do so, this requirement must be enforced explicitly. Indeed, consider the COMPAS study (ProPublica, 2020), with the goal of predicting recidivism based on past offense data while controlling for gender and racial bias. Explicitly, let $\tilde{\ell}_0$ denote the cross-entropy loss $\tilde{\ell}_0(\hat{y}, y) = -\log[\hat{y}]_y$. By collecting the protected features into the separate vector $z$, i.e., $x = [\tilde{x}, z]$, we can formulate the problem of learning a predictor insensitive to transformations $\rho_i$ that encompass all possible single-variable modifications of $z$. Explicitly,

$$\min_{\theta \in \mathbb{R}^p} \mathbb{E}\left[\tilde{\ell}_0(f_\theta(x), y)\right]$$

subject to

$$\mathbb{E}\left[D_{KL}(f_\theta(\tilde{x}, z) \,\|\, f_\theta(\tilde{x}, \rho_i(z)))\right] \leq c, \quad i = 1, \ldots, m,$$

where $c > 0$ is the desired sensitivity level. Note that this formulation corresponds to the notion of (average) counterfactual fairness from (Kusner et al., 2018, Definition 5). In this setting, each constraint represents a requirement that the output of the classifier be near-invariant to changes in the protected features (here, gender and race). For instance, the prediction should be (almost) the same whether, all else being equal, the gender of the input is changed from “Male” to “Female” (and vice-versa) or the race is changed from “Caucasian” to “African-American.”

Figure 1: Feasibility of primal iterates in a constrained learning problem with fairness requirements. **Left**: Example of a hard constraint which oscillates between feasibility and infeasibility, and an easy constraint which remains feasible for all iterations. **Right**: After training accuracy has settled (around half of training epochs), all but the last constraint are infeasible 30-45% of the iterations. In fact, at least one constraint is violated on 85% of the iterations shown. We cannot therefore stop the algorithm and expect to obtain a feasible solution.

Note that even if the losses $\tilde{\ell}_i(\hat{y}, y)$ are convex in $(\hat{y}, y)$ (as is the case of the cross-entropy), the functions $\ell_i$ need not be convex in $\theta$. This is the case, for instance, for typical modern ML models (e.g., if $f_\theta$ is a neural network, NN). Hence, $(P_p)$ is usually a non-convex optimization problem for which there is no straightforward way to project onto the feasibility set (e.g., onto the set of fair NNs). In light of these challenges, we turn to Lagrangian duality.
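To make Example 2.1 concrete, the following hedged sketch (an illustration, not the experimental code of the paper) estimates one fairness constraint, $\ell_i(f_\theta) = \mathbb{E}[D_{KL}(f_\theta(\tilde{x}, z) \,\|\, f_\theta(\tilde{x}, \rho_i(z)))] - c$, on a minibatch, where $\rho_i$ flips a single (assumed binary-coded) protected attribute and `model` is assumed to return class probabilities:

```python
import torch

def fairness_constraint(model, x_tilde, z, flip_index, c):
    """Monte Carlo estimate of E[KL(f(x~, z) || f(x~, rho_i(z)))] - c.

    The constraint is satisfied when the returned value is <= 0.
    """
    z_flipped = z.clone()
    z_flipped[:, flip_index] = 1.0 - z_flipped[:, flip_index]   # rho_i: flip one attribute
    p = model(torch.cat([x_tilde, z], dim=-1))
    q = model(torch.cat([x_tilde, z_flipped], dim=-1))
    kl = (p * (p.clamp_min(1e-12).log() - q.clamp_min(1e-12).log())).sum(dim=-1)
    return kl.mean() - c
```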
This leads to the dual function $$g_p(\lambda_p) = \min_{\theta \in \Theta} L(f_\theta, \lambda_p),$$ based on which we can in turn define the dual problem of $(P_p)$ as $$D_p^* = \max_{\lambda_p \geq 0} g_p(\lambda_p).$$ This saddle-point problem can be viewed as a two-player game or as a regularized learning problem, where the regularization parameter is also an optimization variable. As such, $(D_p)$ is a relaxation of $(P_p)$, implying that $D_p^* \leq P_p^*$. This is known as weak duality (Bertsekas, 1997). The dual function $g_p$ in (2) is concave, irrespective of whether $(P_p)$ is convex (it is the pointwise minimum of a family of affine functions on $\lambda$ (Boyd & Vandenberghe, 2004)). As such, though $g_p$ may not be differentiable, it can be equipped with supergradients that provide potential ascent directions. Explicitly, a vector $s \in \mathbb{R}^m$ is a supergradient of the concave function $h : \mathbb{R}^m \to \mathbb{R}$ at a point $x$ if $h(z) - h(x) \geq s^T(z - x)$ for all $z$. The set of all supergradients of $h$ at $x$ is called the superdifferential and is denoted $\partial h(x)$. When the losses $\ell_i$ are continuous, the superdifferential of $g_p$ admits a simple description (Nedić & Ozdaglar, 2009), namely, $$\partial g_p(\lambda_p) = \text{conv} \left[ \ell(f_\theta(\lambda_p)) : f_\theta(\lambda_p) \in \mathcal{F}_\theta^*(\lambda_p) \right],$$ where $\text{conv}(S)$ denotes the convex hull of the set $S$ and $\mathcal{F}_\theta^*(\lambda_p)$ denotes the set of Lagrangian minimizers $f_\theta(\lambda_p)$ associated to the multiplier $\lambda_p$, i.e., $$\mathcal{F}_\theta^*(\lambda_p) = \arg \min_{\theta \in \Theta} L(f_\theta, \lambda_p).$$ Algorithm 1 Dual Constrained Learning 1: Inputs: number of iterations $T \in \mathbb{N}$, step size $\eta > 0$. 2: Initialize: $\lambda(1) = 0$ 3: for $t = 1, \ldots, T$ do 4: Obtain $f_\theta(t)$ such that $$f_\theta(t) \in \arg\min_{\theta \in \Theta} \ell_0(f_\theta) + \lambda(t)^T \ell(f_\theta) = \arg\min_{\theta \in \Theta} L(f_\theta, \lambda(t))$$ 5: Update dual variables $$\lambda_i(t + 1) = \max \left[0, \lambda_i(t) + \eta \ell_i(f_\theta(t))\right]$$ 6: end for In particular, this leads to an algorithm for solving $(D_p)$ known as projected supergradient ascent (Polyak, 1987; Shor, 2013) that we summarize in Algorithm 1. When executing Algorithm 1, dual iterates $\lambda_p(t)$ move in ascent directions of the concave function $g_p$ (Shor, 2013, Section 2.4). Yet, the sequence of primal iterates $\{f_\theta(t)\}_{t=1}^T$ obtained as a by-product need not approach the set of solutions of $(P_p)$. The experiment in Figure 1 showcases this behaviour and illustrates that, in general, one cannot simply stop the dual ascent algorithm at any iteration $t$ and expect the primal iterate $f_\theta(t)$ to be feasible. Additionally, the Lagrangian minimizers are not unique. In particular, for an optimal dual variable $\lambda_p^\star \in \Lambda_p^\star$, the set $F_p^\star(\lambda_p^\star)$ is typically not a singleton and could contain infeasible elements (i.e., $\ell_i(f_\theta(\lambda_p^\star)) > 0$ for some $i \geq 1$). Even more so, as $\lambda_p(t)$ approaches $\Lambda_p^\star$, the constraint satisfaction of primal iterates can exhibit pathological cyclostationary behaviour, where one or more constraints oscillate between feasibility and infeasibility, see e.g., (Cotter et al., 2019, Section 6.3.1). 
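For reference, a minimal sketch of the dual ascent loop of Algorithm 1, assuming a hypothetical `lagrangian_minimizer` oracle for Step 4 (e.g., a few epochs of SGD on the $\lambda$-weighted loss) and a `risks` routine returning empirical constraint slacks; the toy convex instance at the end is purely illustrative:

```python
import numpy as np

def dual_ascent(lagrangian_minimizer, risks, m, T=100, eta=0.05):
    """Sketch of Algorithm 1: projected supergradient ascent on the dual function g_p.

    lagrangian_minimizer(lam) -> model (approximately) minimizing L(f_theta, lam)  (Step 4).
    risks(model) -> array of shape (m,) with the constraint values ell_i(model),
                    which form a supergradient of g_p at lam (cf. the superdifferential above).
    """
    lam = np.zeros(m)                                    # Step 2: lambda(1) = 0
    iterates = []
    for _ in range(T):
        model = lagrangian_minimizer(lam)                # Step 4
        slack = np.asarray(risks(model), dtype=float)
        lam = np.maximum(0.0, lam + eta * slack)         # Step 5: projected ascent update
        iterates.append(model)                           # any single iterate may be infeasible (Fig. 1)
    return lam, iterates

# Toy convex instance: minimize (theta - 1)^2 subject to theta <= 0.
theta_star = lambda lam: 1.0 - lam[0] / 2.0              # closed-form Lagrangian minimizer
lam, thetas = dual_ascent(theta_star, risks=lambda th: [th], m=1, T=500, eta=0.1)
print(lam, thetas[-1])                                   # approaches lambda* = 2 and theta* = 0
```

In this convex toy instance the dual iterates converge and the primal iterates approach the feasible optimum; the oscillations discussed above arise precisely when, unlike here, the Lagrangian minimizer is non-unique or non-convexity breaks this correspondence.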
For these reasons, convergence guarantees for non-convex optimization problems typically require randomization over (a subset of) the sequence $\{f_\theta(t)\}_{t=1}^T$, which is far from practical [see e.g., (Agarwal et al., 2018, Theorem 2), (Kearns et al., 2018, Theorem 4.1), (Cotter et al., 2019, Theorem 2), (Chamon et al., 2023, Theorem 3)]. In the sequel, we show conditions under which this is not necessary. 3 Near-Optimal Solutions of Constrained Learning Problems Primal iterates obtained as a by-product of the dual ascent method in Algorithm 1 may fail to be solutions of $(P_p)$. However, it has been observed that taking the last or best iterate can perform well in practice. This can be understood by viewing $(P_p)$ as the parametrized version of a benign functional program, amenable to a Lagrangian formulation. This unparametrized problem does not suffer from the same limitations as $(P_p)$ in terms of primal recovery and we can thus use its solution as a reference point to measure the sub-optimality of the primal iterates obtained with Algorithm 1. The unparametrized constrained learning problem is defined as $$P_u^\star = \min_{\phi \in F} \ell_0(\phi)$$ subject to $\ell_i(\phi) \leq 0, \quad i = 1, \ldots, m \quad (P_u)$$ where $F$ is a convex, compact subset of an $L^2$ space. For instance, $F$ can be a subset of the space of continuous functions or a reproducing kernel Hilbert space (RKHS) and $F_\theta$ can be induced by a neural network architecture with smooth activations or a finite linear combinations of kernels. In both cases, we know that $F_\theta$ can uniformly approximate $F$ arbitrarily well as the dimension of $\theta$ grows (Hornik, 1991; Berlinet & Thomas-Agnan, 2011). The smallest choice of $F$ is in fact $\text{conv}(F_\theta)$ (closed convex hull of $F_\theta$). Analogous to the definitions from Section 2.1, $$g_u(\lambda_u) := \min_{\phi \in F} L(\phi, \lambda_u)$$ denotes the unparametrized dual function, \( F^*(\lambda_u) = \arg\min_{\phi \in F} L(\phi, \lambda_u) \) is the set of Lagrangian minimizers \( \phi(\lambda_u) \) associated to \( \lambda_u \) and \[ D_u^* = \max_{\lambda_u \geq 0} g_u(\lambda_u) \] \( (D_u) \) is the unparametrized dual problem. The subscript \( u \) is used to denote quantities related to the unparametrized problem \( (P_u) \). We now present two assumptions that allow us to characterize the relation between the dual and primal solutions of problem \( D_u \). **Assumption 3.1.** The functionals \( \ell_i, i = 0, \ldots, m \), are convex and \( M \)-Lipschitz continuous in \( F \). Additionally, \( \ell_0 \) is \( \mu_0 \)-strongly convex. Note that we require convexity of the losses with respect to their functional arguments and not model parameters \( \theta \), which holds for most typical losses, e.g., mean squared error and cross-entropy loss. **Assumption 3.2.** There exists \( \phi \in F \) such that \( \ell(\phi) < \min[0, \ell(\phi(\lambda_p^*)), \ell(f_\theta(\lambda_p^*))] \) for all \( \phi(\lambda_p^*) \in F(\lambda_p^*), f_\theta(\lambda_p^*) \in F_\theta(\lambda_p^*) \) and \( \lambda_p^* \in \Lambda_p^* \). Assumption 3.2 is a stronger version of Slater’s constraint qualification, which requires only \( \ell(\phi) < 0 \). Here, we require the existence of a (suboptimal) candidate \( \phi \) that is strictly feasible even for perturbed versions of \( (P_u) \). Under these assumptions, the Lagrangian minimizer is unique. 
This makes the superdifferential of the dual function a singleton at every \( \lambda_u \): \( \partial g_u(\lambda_u) = \{\ell(\phi(\lambda_u))\} \), which means that the dual function \( g_u(\lambda_u) \) is differentiable (Shor, 2013). Let \( \phi^* \) be a solution of problem \( (P_u) \). Assumptions 3.1 and 3.2 imply that strong duality (i.e., \( P_u^* = D_u^* \)) holds in this problem, and that at \( \lambda_u^* \), there is a unique Lagrangian minimizer \( \phi^*(\lambda_u^*) = \phi^* \) which is, by definition, feasible (Bertsekas, 1997). The only difference between problems \( (P_p) \) and \( (P_u) \) is the set over which the optimization is carried out. Thus, if the parametrization \( \Theta \) is rich enough (e.g., deep neural networks), the set \( F_\theta \) is essentially the same as \( F \), and we should expect the properties of the solutions to problems \( (D_p) \) and \( (D_u) \) to be similar. This insight leads us to the \( \nu \)-near universality of the parametrization assumption. **Assumption 3.3.** For all \( \phi \in F \), there exists \( \theta \in \Theta \) such that \( \| \phi - f_\theta \|_{L_2} \leq \nu \). The constant \( \nu \) in Assumption 3.3 is a measure of how well \( F_\theta \) covers \( F \). Consider, for instance, that \( F \) is the set of continuous functions and \( F_\theta \) the set of functions implementable with a two-layer neural network with sigmoid activations and \( K \) hidden neurons. If the parametrization has 10 neurons in the hidden layer, it is considerably worse at representing elements in \( F \) than one with 1000 neurons. While determining the exact value of \( \nu \) is in general not straightforward, any \( \nu > 0 \) can be achieved for a large enough number of neurons (Hornik, 1991). The same holds for the number of kernels and an RKHS (Berlinet & Thomas-Agnan, 2011). Given these facts, it is legitimate to ask: how close are the elements of \( F_\theta(\lambda_p^*) \) to \( \phi^* \) in terms of their optimality and constraint satisfaction? Bounding these errors would theoretically justify the use of last primal iterates, doing away with the need for randomization. ### 3.1 Near-optimality and Near-feasibility of Dual Learning A key challenge of using duality to undertake \( (P_p) \) is that the value \( D_p^* \) of the dual problem \( (D_p) \) need not be a good approximation of the value \( P_p^* \) of \( (P_p) \) (i.e., lack of strong duality). This was tackled in (Chamon et al., 2023, Prop. 3.3). Explicitly, under Assumptions 3.1-3.3, the duality gap of problem \( (P_p) \) is bounded as in \[ P_p^* - D_p^* \leq M\nu(1 + \| \tilde{\lambda}^* \|_1) := \Gamma_1, \] where \( \tilde{\lambda}^* \) maximizes \( \tilde{g}_p(\lambda) = g_p(\lambda) + M\nu\|\lambda\|_1 \). This result, however, only shows that the dual problem can be used to approximate the value of the constrained problem \( (P_p) \). It says nothing about whether it can provide a (near-)feasible solution, which is the main issue addressed in this paper. We next characterize the sub-optimality and constraint violation of the Lagrangian minimizers \( f_\theta(\lambda_p^*) \in F_\theta(\lambda_p^*) \) by comparing these primal variables with the solution of the unparametrized problem \( \phi^* \). The curvature of the unparametrized dual function \( g_u(\lambda_u) \) around its optimum is central to this analysis. We will first provide a result with the following assumption on this curvature and then describe its connection to the properties of \((P_p)\). 
Let \(H_\lambda := \{\gamma \lambda_u^* + (1 - \gamma) \lambda_p^* : \gamma \in [0, 1]\}\) denote the segment connecting \(\lambda_u^*\) and \(\lambda_p^*\). **Assumption 3.4.** The dual function \(g_u\) is \(\mu_g\)-strongly concave and \(\beta_g\)-smooth along \(H_\lambda\). The following proposition characterizes the constraint violation for all \(f_\theta(\lambda_p^*) \in F_\theta^*(\lambda_p^*)\) with respect to \(\phi^*\); the optimal, feasible solution of the unparametrized problem. **Proposition 3.1.** Under Assumptions 3.1-3.4, any \(f_\theta(\lambda_p^*) \in F_\theta^*(\lambda_p^*)\), approximates the constraint value of the solution \(\phi^*\) of \((P_u)\) as in: \[ \|\ell(f_\theta(\lambda_p^*)) - \ell(\phi^*)\|_2^2 \leq 2\beta_g M\nu(1 + \|\lambda_p^*\|_1)\left(1 + \sqrt{\frac{\beta_g}{\mu_g}}\right)^2. \] Since \((P_u)\) is feasible, \(\ell(\phi^*)\) is non-positive. Hence, the approximation bound in Proposition 3.1 is stronger than an infeasibility bound on \(f_\theta(\lambda_p^*)\). Indeed, it says not that \(\ell(f_\theta(\lambda_p^*)) \leq 0\), but that it approximates the constraint values of the optimal solution \(\phi^*\). The ratio \(\beta_g/\mu_g\) (i.e. the condition number of \(g_u\)), which determines optimal step sizes in dual ascent methods (Polyak, 1987), also plays a key role here, representing the tension between two fundamental forces driving this bound. On the one hand, the sensitivity of the dual problems, controlled by \(\mu_g\), which determines how different \(\lambda_u^*\) and \(\lambda_p^*\) are. On the other hand, the sensitivity of the primal problems, linked to the smoothness constant \(\beta_g\), which determines the effect of this difference on feasibility. Nevertheless, Proposition 3.1 remains abstract. To connect it to the properties of \((P_p)\), we rely on the following assumptions to obtain bounds on \(\mu_g\) and \(\beta_g\). **Assumption 3.5.** The functionals \(\ell_i\), \(i = 0, \ldots, m\) are \(\beta\)-smooth on \(F\). **Assumption 3.6.** The Jacobian \(D_\phi \ell(\phi^*)\) is full-row rank at the optimum, i.e., there exists \(\sigma > 0\) such that \(\inf_{\|\lambda\|_2 = 1} \|\lambda^T D_\phi \ell(\phi^*)\|_{L_2} \geq \sigma\), where \(D_\phi \ell(\phi^*)\) denotes the Fréchet derivative of the functional \(\ell\) at \(\phi^*\) (see definition in Appendix A.1). Assumption 3.6 is unlike the previous regularity assumptions over which a practitioner has full control and is not straightforward to satisfy at first sight. It is, however, a typical assumption used to derive duality results in convex optimization known as linear independence constraint qualification or LICQ (Bertsekas, 1997). As such, it could be replaced by a different constraint qualification, such as a stricter version of Assumption 3.2. This is, however, left for future work. Under these assumptions, we can describe the curvature of \(g_u\) in terms of the problem parameters as follows. **Lemma 3.1.** Under assumptions 3.1, 3.2, 3.5 and 3.6, \(g_u(\lambda_u)\) is \(\mu_g\)-strongly concave and \(\beta_g\)-smooth on \(H_\lambda\) for \(\mu_g = \frac{\mu_0 \sigma^2}{\beta^2(1+\Delta)^2}\) and \(\beta_g = \frac{\sqrt{mM^2}}{\mu_0}\), where \(\Delta = \max(\|\lambda_u^*\|_1, \|\lambda_p^*\|_1)\). 
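To give a feel for how the constants of Lemma 3.1 propagate into the bound of Proposition 3.1, a small numeric sketch with purely illustrative values for $M$, $\mu_0$, $\beta$, $\sigma$, $\nu$ and the dual norms (these are assumptions, not quantities estimated from any experiment):

```python
import numpy as np

# Illustrative constants (assumptions for the sake of the sketch).
M, mu0, beta, sigma = 1.0, 1.0, 1.0, 1.0     # Lipschitz, strong convexity, smoothness, LICQ
m, nu = 2, 1e-3                              # number of constraints, parametrization gap
lam_p, lam_u = 1.0, 0.8                      # ||lambda_p*||_1 and ||lambda_u*||_1
Delta = max(lam_p, lam_u)

# Curvature constants of g_u along H_lambda (Lemma 3.1).
mu_g = mu0 * sigma**2 / (beta**2 * (1 + Delta)**2)
beta_g = np.sqrt(m) * M**2 / mu0

# Constraint-approximation bound of Proposition 3.1 (squared L2 norm).
bound_sq = 2 * beta_g * M * nu * (1 + lam_p) * (1 + np.sqrt(beta_g / mu_g))**2
print(f"mu_g={mu_g:.3f}, beta_g={beta_g:.3f}, feasibility bound ~ {np.sqrt(bound_sq):.3f}")
```

Shrinking $\nu$ (a richer parametrization) drives this bound toward zero, anticipating the role that parametrization richness plays in Theorem 3.1 below.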
Having characterized the curvature of the unparametrized dual function \(g_u\), we can now state the main result of this section, which puts together Proposition 3.1, Lemma 3.1, and the near-optimality result from (Chamon et al., 2023) in (4) to bound the near-optimality and near-feasibility of Lagrangian minimizers associated to optimal dual variables. **Theorem 3.1.** Under assumptions 3.1, 3.2, 3.3, 3.5 and 3.6, the sub-optimality and infeasibility of any \(f_\theta(\lambda_p^*) \in F(\lambda_p^*)\) is bounded by: \[ \|\ell(f_\theta(\lambda_p^*)) - \ell(\phi^*)\|_\infty \leq \Gamma_2 := M \left[1 + \kappa_1 \kappa_0 (1 + \Delta)\right] \sqrt{2m \frac{M\nu}{\mu_0}(1 + \|\lambda_p^*\|_1)} \tag{5} \] \[ |P_p^* - \ell_0(f_\theta(\lambda_p^*))| \leq (1 + \|\lambda_p^*\|_1) M\nu + \Gamma_1 + \|\lambda_p^*\|_1 \Gamma_2 \tag{6} \] with \(\kappa_1 = \frac{M}{\sigma}, \kappa_0 = \frac{\beta}{\mu_0}, \Delta = \max(\|\lambda_u^*\|_1, \|\lambda_p^*\|_1)\) and \(\Gamma_1\) as in (4). Theorem 3.1 shows that the dual problem \((D_p)\) not only approximates the value \(P_p^*\) of \((P_p)\), but also provides approximate solutions for it. The quality of these approximations depends on three factors. First, the sensitivity of the learning problem, as captured by the Lipschitz constant \(M\) and the constants \(\kappa_1\) and \(\kappa_0\), that correspond to the condition numbers of the constraint Jacobian and the objective function respectively. Overall, these quantities measure how well-conditioned the problem is. Second, the requirements difficulty. Indeed, the optimal dual variables can be seen as measures of the sensitivity of the objective value with respect to constraint perturbations (see, e.g., Boyd & Vandenberghe, 2004). Hence, the more stringent the constraints, the larger \( \|\lambda^*_u\|_1 \) and/or \( \|\lambda^*_p\|_1 \). Finally, the approximation error depends on the factor \( \nu \) that denotes the richness of the parametrization, i.e., how good it is at approximating functions in \( F \) (Assumption 3.3). In fact, Theorem 3.1 shows that as the model capacity increases (\( \nu \) decreases), the approximation bounds (5)–(6) improve. This behavior is not trivial. Indeed, while we expect that richer parametrizations lead to lower approximation errors, Theorem 3.1 states that they also make solving the optimization problem \( (P_p) \) easier, since dual solutions then provide better approximations of primal solutions. Observe that the effect of these factors on feasibility in (5) are similar to those on optimality in (6) and, e.g., (Chamon et al., 2023). Next, we leverage these results to provide convergence guarantees for Algorithm 1. But first, we outline the main ideas behind the proof of Proposition 3.1. ### 3.2 Proof Sketch In this section, we provide a brief outline of the proof of Theorem 3.1. We begin by decomposing the distance between constraint violations as \[ \|\ell(f_\theta(\lambda^*_p)) - \ell(\phi^*)\|_2 = \|\ell(f_\theta(\lambda^*_p)) - \ell(\phi(\lambda^*_p)) + \ell(\phi(\lambda^*_p)) - \ell(\phi^*)\|_2 \\ \leq \|\ell(f_\theta(\lambda^*_p)) - \ell(\phi(\lambda^*_p))\|_2 + \|\ell(\phi(\lambda^*_p)) - \ell(\phi(\lambda^*_u))\|_2 \] The first term captures the effect of parametrizing the hypothesis class for a fixed dual variable. In contrast, the second term characterizes the effect of changing the dual variables on the unparametrized Lagrangian minimizer. This is made clear in (7) by using the fact that \( \phi^* = \phi(\lambda^*_u) \) (see discussion in Section 3). 
In the sequel, we analyze each of these terms separately. For conciseness, all technical definitions from this section are deferred to Appendix A.1. #### 3.2.1 Dual Variable Perturbation We begin by analyzing the second term in (7). Recall from the beginning of Section 3 that under Assumption 3.1–3.2, it holds that \( \nabla_{\lambda} g_u(\lambda) = \ell(\phi(\lambda)) \). Hence, \( \|\ell(\phi(\lambda^*_p)) - \ell(\phi(\lambda^*_u))\|_2 = \|\nabla_{\lambda} g_u(\lambda^*_p) - \nabla_{\lambda} g_u(\lambda^*_u)\|_2 \). Using the \( \beta_g \)-smoothness of \( g_u \), this gradient difference can be bounded using \( \|\lambda^*_p - \lambda^*_u\|_2 \). The latter can in turn be bounded by combining the \( \nu \)-universality of the parametrization (Assumption 3.3) and convex optimization perturbation results to obtain: **Proposition 3.2.** Under assumptions 3.1-3.4, the distance between the constraint violations of \( \phi(\lambda^*_p) \) and \( \phi(\lambda^*_u) \) is bounded by: \[ \|\ell(\phi(\lambda^*_p)) - \ell(\phi(\lambda^*_u))\|_2^2 \leq 2\frac{\beta_g^2}{\mu_g} M\nu(1 + \|\lambda^*_p\|_1) \] #### 3.2.2 Hypothesis Class Perturbation Bounding the first term in (7) is less straightforward. To do so, we rely on the perturbation function of the unparametrized problem \( (P_u) \), defined as \[ P_u^*(\epsilon) = \min_{\phi \in F} \ell_0(\phi) \\ \text{s.to } \ell(\phi) + \epsilon \preceq 0, \] for some perturbation \( \epsilon \in \mathbb{R}^m \). Intuitively, \( P_u^*(\epsilon) \) quantifies the impact on the objective value of modifying the constraint specifications by \( \epsilon \). Note that the unparametrized problem \( (P_u) \) is recovered for \( \epsilon = 0 \). Motivated by the fact that we can get a strong handle on the sensitivity of the perturbation function \( (P_e) \), we seek to bound \( \|\ell(f_\theta(\lambda^*_p)) - \ell(\phi(\lambda^*_p))\|_2 \) by instead analyzing \( |P_u^*(\epsilon_p) - P_u^*(\epsilon_u)| \) for \( \epsilon_p = -\ell(f_\theta(\lambda^*_p)) \) and \( \epsilon_u = -\ell(\phi(\lambda^*_p)) \). Indeed, it holds for every \( \lambda \in \mathbb{R}_+^m \) that \( P^\dagger(\lambda) = -g_u(\lambda) \), where \( \dagger \) denotes the Fenchel conjugate (see Appendix A.4). We can therefore relate the curvature of \( g_u \) to that \( P_u^*(\epsilon) \) (Kakade et al., 2009) to obtain: **Proposition 3.3.** Under assumptions 3.1-3.4, the distance between constraint violations associated to the parametrization of the hypothesis class is given by: \[ \|\ell(\phi(\lambda^*_p)) - \ell(f_\theta(\lambda^*_p))\|_2^2 \leq 2\beta_g M\nu(1 + \|\lambda^*_p\|_1) \] Using Propositions 3.2–3.3 in (7) yields Proposition 3.1. Theorem 3.1 is then obtained by further leveraging Lemma 3.1 and the bound on the duality gap $P_p^* - D_p^*$ in (4) (see Appendix A.13.2). 4 BEST ITERATE CONVERGENCE In this section, we leverage the connection between the parameterized [cf. $(P_p)$ and $(D_p)$] and unparameterized [cf. $(P_u)$ and $(D_u)$] problems to analyze the convergence of Algorithm 1. Seeking a more general result, we relax Steps 4 and 5 to allow for approximate Lagrangian minimization and the use of stochastic supergradients of the dual function respectively. Explicitly, we assume that for all $t$, the oracle in Step 4 returns a function $f_\theta^o(t)$ such that $$L(f_\theta^o(t), \lambda_p) \leq \min_{\theta \in \Theta} L(f_\theta, \lambda(t)) + \rho,$$ for an approximation error $\rho \geq 0$. 
In contrast to Step 4, equation 9 accounts for potential numerical and approximation errors in the computation of the Lagrangian minimizer. The existence of such a $\rho$-approximate oracle is a typical assumption in the analysis of dual algorithms (Cotter et al., 2019; Chamon et al., 2023; Kearns et al., 2018) and is often justified by substantial theoretical and empirical evidence that many ML optimization problems can be efficiently solved despite non-convexity. That is the case, e.g., for deep neural networks (Zhang et al., 2021; Brutzkus & Globerson, 2017; Soltanolkotabi et al., 2018; Ge et al., 2017). Additionally, we consider that the dual variable update in Step 5 is replaced by $$\lambda_i(t + 1) = \max \left[0, \lambda_i(t) + \eta \hat{\ell}_i(f_\theta^o(t))\right],$$ where $\hat{\ell}_i(f_\theta^o(t))$ are conditionally unbiased estimates of the statistical risks $\ell_i(f_\theta^o(t))$, i.e., $\mathbb{E}[\hat{\ell}_i(f_\theta^o(t)) | \lambda(t)] = \ell_i(f_\theta^o(t))$. This stochastic update accounts for, e.g., the use of independent sample batches to estimate the constraint slacks $\ell_i(f_\theta^o)$. The following Lemma establishes the convergence of the best iterate of (9)–(10), i.e., of the dual variables $\lambda_i(t)$ that yield the largest dual function for all $t$. **Lemma 4.1.** Let $g_p^{\text{best}}(t|\lambda(t_0)) = \max_{s \in [t_0,t]} g_p(\lambda(s))$ be the maximum value of the parametrized dual function up to time $t$. Then, for all $t_0 > 1$, it holds that $$\lim_{t \to \infty} g_p^{\text{best}}(t|\lambda(t_0)) \geq D_p^* - \left(\frac{\eta S^2}{2} + \rho\right) \quad \text{a.s.},$$ where $S^2 > \sum_{i=1}^{m} \mathbb{E}\left[\hat{\ell}_i(f_\theta^o(t))^2 | \lambda(t)\right]$. The existence of a finite $S^2$ is implied by the assumption that $F_\theta \subseteq F \subset L^2$. Lemma 4.1 implies that for any $\delta > 0$, there exists a finite $t^*$ such that $\lambda(t^*)$ achieves the value $D_p^* - \left(\frac{\eta S^2}{2} + \rho + \delta\right)$. We denote this iterate $\lambda^{\text{best}}$. Note that the step size $\eta$ can be reduced so as to make $g_p^{\text{best}}$ arbitrarily close to $D_p^* - \rho$ (asymptotically). In view of the bound on the duality gap $P_p^* - D_p^*$ in (4), Lemma 4.1 implies that $\lambda^{\text{best}}$ is near-optimal. Combine with the near-feasibility results from Section 3, we can also bound the constraint violations of the Lagrangian minimizer associated with $\lambda^{\text{best}}$. **Proposition 4.1.** Let $\lambda^{\text{best}}$ be any dual iterate that achieves $g_p(\lambda^{\text{best}}) \geq D_p^* - (\eta S^2/2 + \rho)$. Suppose there exists $\phi \in F$ such that $\ell(\phi) \prec \min\{0, \ell(\phi(\lambda^{\text{best}})), \ell(f_\theta(\lambda^{\text{best}}))\}$ and that Assumptions 3.1, 3.3, 3.5, and 3.6 hold. Then, $$\|\ell(\phi^*) - \ell(f_\theta(\lambda^{\text{best}}))\|_2^2 \leq 2\beta_g \left(M\nu(1 + \|\lambda^{\text{best}}\|_1) + \frac{\eta S^2}{2} + \rho\right) \left(1 + \sqrt{\frac{\beta_g}{\mu_g}}\right)^2,$$ where $\tilde{\mu}_g = \frac{\mu_g \sigma^2}{\beta^2(1 + \max\{\|\lambda^*_p\|_1, \|\lambda^{\text{best}}\|_1\})^2}$. Reasonably, the bound in Proposition 4.1 is governed by the same terms as Theorem 3.1. Here, however, the bound is the loosened by the sub-optimality of $\lambda^{\text{best}}$ with respect to $\lambda^*_p$. Figure 2: **Left:** the Unconstrained model performs better in terms of average test accuracy than both the Last and Randomized model. 
**Middle:** Both constrained models do better in terms of Counterfactual Fairness. The key point is that the Last iterate is never far from the Randomized one in terms of constraint violation. **Right:** As the richness of the parametrization increases the maximum constraint violation (i.e.: size of the oscillations) decreases. 5 EXPERIMENTAL VALIDATION To illustrate the theoretical results from Sections 3 and 4, we return to the counterfactually fair learning problem from Example 2.1. We work with the COMPAS dataset, where the task is to predict recidivism while remaining insensitive to the protected variables gender and race, which can take the values [“Male”, “Female”] and [“African American”, “Hispanic”, “Caucasian”, “Other”] respectively. We take the parametrized model $f_\theta$ to be a 2-layer NN with sigmoid activations, so that the resulting constrained learning problem is non-convex. Further experimental details are provided in Appendix A.16. We compare the accuracy and constraint satisfaction of three models: an unconstrained predictor, trained without any additional constraints; a last iterate predictor, corresponding to the final iterate $f_\theta(T)$ of an empirical version of Algorithm 1; and a randomized predictor that samples a model uniformly at random from the sequence of primal iterates $\{f_\theta(t)\}_{t=t_0}^T$ for each prediction. As shown in Fig. 2 (**Left**), the unconstrained model is slightly better than the two constrained ones in terms of predictive accuracy. This advantage comes at the cost of less counterfactually fair predictions, i.e., a model more sensitive to the protected features (Fig. 2, **Middle**). The key point of this experiment, however, is that the last iterate and randomized predictors provide similar accuracy and constraint satisfaction, as predicted by Theorem 3.1. Additionally, Fig. 2 (**Right**) showcases the impact of the parametrization richness on the constraint violation of last primal iterates. We control this richness by means of projecting the data onto a lower dimensional space using a fixed, random linear map. Note that, as Theorem 3.1 indicates, the constraint violation decreases by up to an order of magnitude as we increase the capacity of the model. As we have observed before, this behavior is not straightforward: though richer parametrizations are expected to lead to lower approximation errors, it is not immediate that it should make the optimization problem $(P_p)$ easier to solve. 6 CONCLUSION We analyzed primal iterates obtained from a dual ascent method when solving the Lagrangian dual of a primal non-convex constrained learning problem. The primal problem in question is the parametrized version of a convex functional program, which is amenable to a Lagrangian formulation. Specifically, we characterized how far these predictors are from the solution of the unparametrized problem in terms of their optimality and constraint violation. This result led to a characterization of the infeasibility of best primal iterates and elucidated the role of the capacity of the model and the curvature of the objective. These guarantees bridge a gap between theory and practice in constrained learning, shedding light on when and why randomization is unnecessary. The findings presented in this work can be extended in several ways. For instance, a study of the estimation error incurred by minimizing the empirical Lagrangian in Algorithm 1 could be added. 
It might also be possible to characterize the curvature of the dual function by alternative means, which could potentially lift assumptions on the unparametrized problem. 6.1 ACKNOWLEDGMENTS The work of A. Ribeiro and J. Elenter is supported by NSF-Simons MoDL, Award 2031985, NSF AI Institutes program. The work of L.F.O. Chamon is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy (EXC 2075-390740016). REFERENCES Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International conference on machine learning, pp. 60–69. PMLR, 2018. Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008. Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011. Dimitri P Bertsekas. Nonlinear programming. Journal of the Operational Research Society, 48(3): 334–334, 1997. J. Frédéric Bonnans and Alexander Shapiro. Optimization problems with perturbations: A guided tour. SIAM Review, 40(2):228–264, 1998. ISSN 00361445. URL http://www.jstor.org/stable/2653333. Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International conference on machine learning, pp. 605–614. PMLR, 2017. Luiz Chamon and Alejandro Ribeiro. Probably approximately correct constrained learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 16722–16735. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-Paper.pdf. Luiz F. O. Chamon, Santiago Paternain, Miguel Calvo-Fullana, and Alejandro Ribeiro. Constrained learning with non-convex losses. IEEE Transactions on Information Theory, 69(3):1739–1760, 2023. doi: 10.1109/TIT.2022.3187948. Andrew Cotter, Maya R. Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Lutong Wang, Blake E. Woodworth, and Seungil You. Training well-generalizing classifiers for fairness metrics and other data-dependent constraints. In International Conference on Machine Learning, 2018. URL https://api.semanticscholar.org/CorpusID:49556538. Andrew Cotter, Heinrich Jiang, Taman Narayan, Seungil You, and Karthik Sridharan. Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. J. Mach. Learn. Res., 20(172):1–59, 2019. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 1466–1478, 2021. Juan Elenter, Navid NaderiAlizadeh, and Alejandro Ribeiro. A lagrangian duality approach to active learning. Advances in Neural Information Processing Systems, 35:37575–37589, 2022. Matthew M. Engelhard, Ricardo Henao, Samuel I. Berchuck, Junya Chen, Brian Eichner, Darby Herkert, Scott H. Kollins, Andrew Olson, Eliana M. Perrin, Ursula Rogers, Connor Sullivan, YiQin Zhu, Guillermo Sapiro, and Geraldine Dawson. Predictive Value of Early Autism Detection Models Based on Electronic Health Record Data Collected Before Age 1 Year. 
JAMA Network Open, 6(2):e2254303–e2254303, 02 2023. ISSN 2574-3805. doi: 10.1001/jamanetworkopen.2022.54303. URL https://doi.org/10.1001/jamanetworkopen.2022.54303.
rAHcTCMaLc
I feel there are several modifications that move the proposed method away from the concept of SVGD, e.g., the truncation of the particle update. Furthermore, since there will be some discretization error relative to the gradient flow of the KL divergence on Wasserstein space (which is the motivation of SVGD), I wonder whether the entropy of the discretized SVGD is a proper estimate for the KL to the energy-based policy.
S²AC: ENERGY-BASED REINFORCEMENT LEARNING WITH STEIN SOFT ACTOR CRITIC Safa Messaoud¹†, Billel Mokeddem¹*, Zhenghai Xue²*, Linsey Pang³, Bo An¹,², Haipeng Chen³†, Sanjay Chawla¹† ¹Qatar Computing Research Institute, Hamad Bin Khalifa University, ²School of Computer Science and Engineering, Nanyang Technological University, ³SalesForce, ⁴Skywork AI, ⁵Data Science, William & Mary {smessaoud,bmokeddem,schawla}@hbku.edu.qa, zhenghai001@e.ntu.edu.sg panglinsey@gmail.com, boan@ntu.edu.sg, hchen23@wm.edu * Equal contribution † Corresponding authors ABSTRACT Learning expressive stochastic policies instead of deterministic ones has been proposed to achieve better stability, sample complexity, and robustness. Notably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is modeled as an expressive Energy-Based Model (EBM) over the Q-values. However, this formulation requires the estimation of the entropy of such EBMs, which is an open problem. To address this, previous MaxEnt RL methods either implicitly estimate the entropy, resulting in high computational complexity and variance (SQL), or follow a variational inference procedure that fits simplified actor distributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft Actor-Critic (S²AC), a MaxEnt RL algorithm that learns expressive policies without compromising efficiency. Specifically, S²AC uses parameterized Stein Variational Gradient Descent (SVGD) as the underlying policy. We derive a closed-form expression of the entropy of such policies. Our formula is computationally efficient and only depends on first-order derivatives and vector products. Empirical results show that S²AC yields more optimal solutions to the MaxEnt objective than SQL and SAC in the multi-goal environment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is available at: https://github.com/SafaMessaoud/S2AC-Energy-Based-RL-with-Stein-Soft-Actor-Critic 1 INTRODUCTION MaxEnt RL (Todorov [2006], Ziebart [2010], Haarnoja et al. [2017], Kappen [2005], Toussaint [2009], Theodorou et al. [2010], Abdolmaleki et al. [2018], Haarnoja et al. [2018a], Vieillard et al. [2020]) has been proposed to address challenges hampering the deployment of RL to real-world applications, including stability, sample efficiency (Gu et al. [2017]), and robustness (Eysenbach & Levine [2022]). Instead of learning a deterministic policy, as in classical RL (Sutton et al. [1999], Schulman et al. [2017], Silver et al. [2014], Lillicrap et al. [2015]), MaxEnt RL learns a stochastic policy that captures the intricacies of the action space. This enables better exploration during training and eventually better robustness to environmental perturbations at test time, i.e., the agent learns multimodal action space distributions which enables picking the next best action in case a perturbation prevents the execution of the optimal one. To achieve this, MaxEnt RL models the policy using the expressive family of EBMs (LeCun et al. [2006]). This translates into learning policies that maximize the sum of expected future reward and expected future entropy. However, estimating the entropy of such complex distributions remains an open problem. To address this, existing approaches either use tricks to go around the entropy computation or make limiting assumptions on the policy. This results in either poor scalability or convergence to suboptimal solutions. For example, SQL (Haarnoja et al. [2017]) implicitly incorporates entropy in the Q-function computation. 
This requires using importance sampling, which results in high variability and hence poor training stability and limited scalability to high dimensional action spaces.

Figure 2: $S^2$AC learns a more optimal solution to the MaxEnt RL objective than SAC and SQL. We design a multigoal environment where an agent starts from the center of the 2-d map and tries to reach one of the three goals ($G_1$, $G_2$, and $G_3$). The maximum expected future reward (level curves) is the same for all the goals but the expected future entropy is different (higher on the path to $G_2/G_3$): the action distribution $\pi(a|s)$ is bi-modal on the path to the left ($G_2$ and $G_3$) and unimodal to the right ($G_1$). Hence, we expect the optimal policy for the MaxEnt RL objective to assign more weights to $G_2$ and $G_3$. We visualize trajectories (in blue) sampled from the policies learned using SAC, SQL, and $S^2$AC. SAC quickly commits to a single mode due to its actor being tied to a Gaussian policy. Though SQL also recovers the three modes, the trajectories are evenly distributed. $S^2$AC recovers all the modes and approaches the left two goals more frequently. This indicates that it successfully maximizes not only the expected future reward but also the expected future entropy.

SAC (Haarnoja et al., 2018a), on the other hand, follows a variational inference procedure by fitting a Gaussian distribution to the EBM policy. This enables a closed-form evaluation of the entropy but results in a suboptimal solution. For instance, SAC fails in environments characterized by multimodal action distributions. Similar to SAC, IAPO (Marno et al., 2021) models the policy as a uni-modal Gaussian. Instead of optimizing a MaxEnt objective, it achieves multimodal policies by learning a collection of parameter estimates (mean, variance) through different initializations for different policies. To improve the expressiveness of SAC, SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) model the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow (Rezende & Mohamed, 2015), respectively. However, due to training stability issues, the reported results in Cetin & Celiktutan (2022) show that though both models learn multi-modal policies, they fail to maximize the expected future entropy in positive rewards setups. We propose a new algorithm, $S^2$AC, that yields a more optimal solution to the MaxEnt RL objective. To achieve expressivity, $S^2$AC models the policy as a Stein Variational Gradient Descent (SVGD) (Liu, 2017) sampler from an EBM over Q-values (target distribution). SVGD proceeds by first sampling a set of particles from an initial distribution, and then iteratively transforming these particles via a sequence of updates to fit the target distribution. To compute a closed-form estimate of the entropy of such policies, we use the change-of-variable formula for pdfs (Devore et al., 2012). We prove that this is only possible due to the invertibility of the SVGD update rule, which does not necessarily hold for other popular samplers (e.g., Langevin Dynamics (Welling & Teh, 2011)). While normalizing flow models (Rezende & Mohamed, 2015) are also invertible, the SVGD-based policy is more expressive as it encodes the inductive bias about the unnormalized density and incorporates a dispersion term to encourage multi-modality, whereas normalizing flows encode a restrictive class of invertible transformations (with easy-to-estimate Jacobian determinants).
Moreover, our formula is computationally efficient and only requires evaluating first-order derivatives and vector products. To improve scalability, we model the initial distribution of the SVGD sampler as an isotropic Gaussian and learn its parameters, i.e., mean and standard deviation, end-to-end. We show that this results in faster convergence to the target distribution, i.e., fewer SVGD steps. Intuitively, the initial distribution learns to contour the high-density region of the target distribution while the SVGD updates result in better and faster convergence to the modes within that region. Hence, our approach is as parameter efficient as SAC, since the SVGD updates do not introduce additional trainable parameters. Note that $S^2$AC can be reduced to SAC when the number of SVGD steps is zero. Also, SQL becomes equivalent to $S^2$AC if the entropy is computed explicitly using our formula (the policy in SQL is an amortized SVGD sampler). Beyond RL, the backbone of $S^2$AC is a new variational inference algorithm with a more expressive and scalable distribution characterized by a closed-form entropy estimate. We believe that this variational distribution can have a wider range of exciting applications. We conduct extensive empirical evaluations of $S^2$AC from three aspects. We start with a sanity check on the merit of our derived SVGD-based entropy estimate on target distributions with known entropy values (e.g., Gaussian) or log-likelihoods (e.g., Gaussian Mixture Models) and assess its sensitivity to different SVGD parameters (kernel, initial distribution, number of steps and number of particles). We observe that its performance depends on the choice of the kernel and is robust to variations of the remaining parameters. In particular, we find out that the kernel should be chosen to guarantee inter-dependencies between the particles, which turns out to be essential for invertibility. Next, we assess the performance of $S^2$AC on a multi-goal environment (Haarnoja et al., 2017) where different goals are associated with the same positive (maximum) expected future reward but different (maximum) expected future entropy. We show that $S^2$AC learns multimodal policies and effectively maximizes the entropy, leading to better robustness to obstacles placed at test time. Finally, we test $S^2$AC on the MuJoCo benchmark (Duan et al., 2016). $S^2$AC yields better performances than the baselines on four out of the five environments. Moreover, $S^2$AC shows higher sample efficiency as it tends to converge with fewer training steps. These results were obtained from running SVGD for only three steps, which results in a small overhead compared to SAC during training. Furthermore, to maximize the run-time efficiency during testing, we train an amortized SVGD version of the policy to mimic the SVGD-based policy. Hence, this reduces inference to a forward pass through the policy network without compromising the performance. 2 PRELIMINARIES 2.1 SAMPLERS FOR ENERGY-BASED MODELS In this work, we study three representative methods for sampling from EBMs: (1) Stochastic Gradient Langevin Dynamics (SGLD) & Deterministic Langevin Dynamics (DL) (Welling & Teh, 2011), (2) Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), and (3) Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016). We review SVGD here since it is the sampler we eventually use in $S^2$AC, and leave the rest to Appendix C.1. SVGD is a particle-based Bayesian inference algorithm. 
Compared to SGLD and HMC which have a single particle in their dynamics, SVGD operates on a set of particles. Specifically, SVGD samples a set of $m$ particles $\{a_j\}_{j=1}^m$ from an initial distribution $q^0$ which it then transforms through a sequence of updates to fit the target distribution. Formally, at every iteration $l$, SVGD applies a form of functional gradient descent $\Delta f$ that minimizes the KL-divergence between the target distribution $p$ and the proposal distribution $q^l$ induced by the particles, i.e., the update rule for the $i^{th}$ particles is: $$a_i^{l+1} = a_i^l + \epsilon \Delta f(a_i^l)$$ with $$\Delta f(a_i^l) = \mathbb{E}_{a_j^l \sim q^l} [k(a_i^l, a_j^l) \nabla_{a_j^l} \log p(a_j^l) + \nabla_{a_j^l} k(a_i^l, a_j^l)].$$ (1) Here, $\epsilon$ is the step size and $k(\cdot, \cdot)$ is the kernel function, e.g., the RBF kernel: $k(a_i, a_j) = \exp(||a_i - a_j||^2/2\sigma^2)$. The first term within the gradient drives the particles toward the high probability regions of $p$, while the second term serves as a repulsive force to encourage dispersion. 2.2 MAXIMUM-ENTROPY RL We consider an infinite horizon Markov Decision Process (MDP) defined by a tuple $(S, A, p, r)$, where $S$ is the state space, $A$ is the action space and $p : S \times A \times S \rightarrow [0, \infty]$ is the state transition probability modeling the density of the next state $s_{t+1} \in S$ given the current state $s_t \in S$ and action $a_t \in A$. Additionally, we assume that the environment emits a bounded reward function $r \in [r_{min}, r_{max}]$ at every iteration. We use $\rho_\pi(s_t)$ and $\rho_\pi(s_t, a_t)$ to denote the state and state-action marginals of the trajectory distribution induced by a policy $\pi(a_t|s_t)$. We consider the setup of continuous action spaces (Lazaric et al., 2007; Lee et al., 2018; Zhou & Lu, 2023). MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) learns a policy $\pi^*(a_t|s_t)$, that instead of maximizing the expected future reward, maximizes the sum of the expected future reward and entropy: $$\pi^* = \arg\max_\pi \sum_t \gamma^t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} [r(s_t, a_t) + \alpha H(\pi(\cdot|s_t))],$$ (2) where $\alpha$ is a temperature parameter controlling the stochasticity of the policy and $H(\pi(\cdot|s_t))$ is the entropy of the policy at state $s_t$. The conventional RL objective can be recovered for $\alpha = 0$. Note that the MaxEnt RL objective above is equivalent to approximating the policy, modeled as an EBM over Q-values, by a variational distribution $\pi(a_t|s_t)$ (see proof of equivalence in Appendix D), i.e., $$\pi^* = \arg\min_\pi \sum_t \mathbb{E}_{s_t \sim \rho_\pi} [D_{KL}(\pi(\cdot|s_t)||\exp(Q(s_t, \cdot)/\alpha)/Z)],$$ (3) where $D_{KL}$ is the KL-divergence and $Z$ is the normalizing constant. We now review two landmark MaxEnt RL algorithms: SAC (Haarnoja et al., 2018a) and SQL (Haarnoja et al., 2017). 
SAC is an actor-critic algorithm that alternates between policy evaluation, i.e., evaluating the Q-values for a policy $\pi_\theta(a_t|s_t)$: $$Q_\phi(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma \mathbb{E}_{s_{t+1}, a_{t+1} \sim \rho_\pi} [Q_\phi(s_{t+1}, a_{t+1}) + \alpha H(\pi_\theta(\cdot|s_{t+1}))]$$ (4) and policy improvement, i.e., using the updated Q-values to compute a better policy: $$\pi_\theta = \arg\max_\theta \sum_t \mathbb{E}_{s_t, a_t \sim \rho_{\pi_\theta}} \left[ Q_\phi(a_t, s_t) + \alpha H(\pi_\theta(\cdot|s_t)) \right].$$ (5) SAC models $\pi_\theta$ as an isotropic Gaussian, i.e., $\pi_\theta(\cdot|s) = \mathcal{N}(\mu_\theta, \sigma_\theta I)$. While this enables computing a closed-form expression of the entropy, it incurs an over-simplification of the true action distribution, and thus cannot represent complex distributions, e.g., multimodal distributions. SQL goes around the entropy computation, by defining a soft version of the value function $V_\phi = \alpha \log \left( \int_A \exp \left( \frac{1}{\alpha} Q_\phi(s_t, a') \right) da' \right)$. This enables expressing the Q-value (Eq (4)) independently from the entropy, i.e., $Q_\phi(s_t, a_t) = r(s_t, a_t) + \gamma \mathbb{E}_{s_{t+1} \sim p}[V_\phi(s_{t+1})]$. Hence, SQL follows a soft value iteration which alternates between the updates of the “soft” versions of Q and value functions: $$Q_\phi(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma \mathbb{E}_{s_{t+1} \sim p}[V_\phi(s_{t+1})], \forall (s_t, a_t)$$ (6) $$V_\phi(s_t) \leftarrow \alpha \log \left( \int_A \exp \left( \frac{1}{\alpha} Q_\phi(s_t, a') \right) da' \right), \forall s_t.$$ (7) Once the $Q_\phi$ and $V_\phi$ functions converge, SQL uses amortized SVGD [Wang & Liu (2016)] to learn a stochastic sampling network $f_\theta(\xi, s_t)$ that maps noise samples $\xi$ into the action samples from the EBM policy distribution $\pi^*(a_t|s_t) = \exp \left( \frac{1}{\alpha} (Q^*(s_t, a_t) - V^*(s_t)) \right)$. The parameters $\theta$ are obtained by minimizing the loss $J_\theta(s_t) = D_{KL}(\pi_\theta(\cdot|s_t)||\exp(\frac{1}{\alpha}(Q_\phi^*(s_t, \cdot) - V_\phi^*(s_t))))$ with respect to $\theta$. Here, $\pi_\theta$ denotes the policy induced by $f_\theta$. SVGD is designed to minimize such KL-divergence without explicitly computing $\pi_\theta$. In particular, SVGD provides the most greedy direction as a functional $\Delta f_\theta(\cdot, s_t)$ (Eq (1)) which can be used to approximate the gradient $\partial J_\theta/\partial a_t$. Hence, the gradient of the loss $J_\theta$ with respect to $\theta$ is: $\partial J_\theta(s_t)/\partial \theta \propto \mathbb{E}_\xi[\Delta f_\theta(\xi, s_t)\partial f_\theta(\xi, s_t)/\partial \theta]$. Note that the integral in Eq (7) is approximated via importance sampling, which is known to result in high variance estimates and hence poor scalability to high dimensional action spaces. Moreover, amortized generation is usually unstable and prone to mode collapse, an issue similar to GANs. Therefore, SQL is outperformed by SAC [Haarnoja et al. (2018a)] on benchmark tasks like MiuJoCo. 3 APPROACH We introduce S$^2$AC, a new actor-critic MaxEnt RL algorithm that uses SVGD as the underlying actor to generate action samples from policies represented using EBMs. This choice is motivated by the expressivity of distributions that can be fitted via SVGD. 
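As a reference for what follows, a minimal numpy sketch of the SVGD transform in Eq (1) with an RBF kernel $k(a_i, a_j) = \exp(-\|a_i - a_j\|^2 / 2\sigma^2)$; the Gaussian target and its score function are illustrative stand-ins for the EBM $\exp(Q(s,\cdot)/\alpha)$:

```python
import numpy as np

def rbf_kernel(particles, sigma=1.0):
    """Kernel matrix K[i, j] = exp(-||a_i - a_j||^2 / (2 sigma^2)) and pairwise differences."""
    diff = particles[:, None, :] - particles[None, :, :]          # (m, m, d), diff[i, j] = a_i - a_j
    return np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2)), diff

def svgd_step(particles, score_fn, eps=0.1, sigma=1.0):
    """One SVGD update (Eq. (1)): kernel-weighted attraction toward high log p
    plus a repulsive term that keeps the particles dispersed."""
    m = particles.shape[0]
    K, diff = rbf_kernel(particles, sigma)
    attraction = K @ score_fn(particles)                           # sum_j k(a_i, a_j) grad log p(a_j)
    repulsion = np.sum(K[:, :, None] * diff, axis=1) / sigma**2    # sum_j grad_{a_j} k(a_i, a_j)
    return particles + eps * (attraction + repulsion) / m

# Toy usage: fit a standard 2-D Gaussian, whose score is grad log p(a) = -a.
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 2)) * 1.5 + 2.0
for _ in range(300):
    a = svgd_step(a, score_fn=lambda x: -x)
print(a.mean(axis=0), a.std(axis=0))   # mean and spread drift toward those of the target
```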
Additionally, we show that we can derive a closed-form entropy estimate of the SVGD-induced distribution, thanks to the invertibility of the update rule, which does not necessarily hold for other EBM samplers. Besides, we propose a parameterized version of SVGD to enable scalability to high-dimensional action spaces and non-smooth Q-function landscapes. S$^2$AC is hence capable of learning a more optimal solution to the MaxEnt RL objective (Eq (2)) as illustrated in Figure 2. 3.1 STEIN SOFT ACTOR CRITIC Like SAC, S$^2$AC performs soft policy iteration which alternates between policy evaluation and policy improvement. The difference is that we model the actor as a parameterized sampler from an EBM. Hence, the policy distribution corresponds to an expressive EBM as opposed to a Gaussian. Critic. The critic’s parameters $\phi$ are obtained by minimizing the Bellman loss as traditionally: $$\phi^* = \arg\min_\phi \mathbb{E}_{(s_t, a_t) \sim \rho_{\pi_\theta}} \left[ (Q_\phi(s_t, a_t) - \tilde{y})^2 \right],$$ (8) with the target $\tilde{y} = r_t(s_t, a_t) + \gamma \mathbb{E}_{(s_{t+1}, a_{t+1}) \sim \rho_\pi}[Q_\phi(s_{t+1}, a_{t+1}) + \alpha H(\pi(\cdot|s_{t+1}))].$ Here $\tilde{\phi}$ is an exponentially moving average of the value network weights [Mnih et al. (2015)]. Actor as an EBM sampler. The actor is modeled as a sampler from an EBM over the Q-values. To generate a set of valid actions, the actor first samples a set of particles $\{a^0\}$ from an initial distribution $q^0$ (e.g., Gaussian). These particles are then updated over several iterations $l \in [1, L]$, i.e., $\{a^{l+1}\} \leftarrow \{a^l\} + \epsilon h(\{a^l\}, s)$ following the sampler dynamics characterized by a transformation $h$ (e.g., for SVGD, $h = \Delta f$ in Eq (1)). If $q^0$ is tractable and $h$ is invertible, it’s possible to compute a closed-form expression of the distribution of the particles at the $l^{th}$ iteration via the change of variable formula [Devore et al. (2012)]: $q^l(a^l|s) = q^{l-1}(a^{l-1}|s) \det(I + \epsilon \nabla_a h(a^l, s))^{-1}, \forall l \in [1, L]$. In this case, the policy is represented using the particle distribution at the final step $L$ of the sampler dynamics, i.e., $\pi(a|s) = q^L(a^L|s)$ and the entropy can be estimated by averaging $\log q^L(a^L|s)$ over a set of particles (Section 3.2). We study the invertibility of popular EBM samplers in Section 3.3. Parameterized initialization. To reduce the number of steps required to converge to the target distribution (hence reducing computation cost), we further propose modeling the initial distribution as a parameterized isotropic Gaussian, i.e., \(a^0 \sim \mathcal{N}(\mu_0(s), \sigma_0(s))\). The parameterization trick is then used to express \(a^0\) as a function of \(\theta\). Intuitively, the actor would learn \(\theta\) such that the initial distribution is close to the target distribution. Hence, fewer steps are required to converge, as illustrated in Figure 3. Note that if the number of steps \(L = 0\), \(S^2\) AC is reduced to SAC. Besides, to deal with the non-smooth nature of deep Q-function landscapes which might lead to particle divergence in the sampling process, we bound the particle updates to be within a few standard deviations (\(t\)) from the mean of the learned initial distribution, i.e., \(-t\sigma_\theta \leq a^l_\theta \leq t\sigma_\theta\), \(\forall l \in [1, L]\). 
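A self-contained sketch of this sampling procedure (building on the SVGD step sketched earlier): particles drawn from a Gaussian initialization are refined by a few SVGD steps toward a hypothetical quadratic Q-landscape, and particles leaving the $\pm t\sigma$ range are dropped rather than clipped, following the selection strategy described next; the Q-function and all constants are illustrative placeholders, not the trained networks of $S^2$AC:

```python
import numpy as np

def sample_actions(mu, sigma, grad_q, alpha=0.2, m=32, L=3, eps=0.05, ker_sigma=1.0, t=3.0):
    """Sketch of the S2AC actor: particles from N(mu, sigma I) refined by L SVGD steps
    toward exp(Q(s, .)/alpha); particles outside [mu - t*sigma, mu + t*sigma] are discarded."""
    rng = np.random.default_rng(0)
    a = mu + sigma * rng.normal(size=(m, mu.shape[0]))        # a^0 ~ N(mu, sigma I)
    for _ in range(L):
        diff = a[:, None, :] - a[None, :, :]                  # (m, m, d)
        K = np.exp(-np.sum(diff**2, -1) / (2 * ker_sigma**2))
        drive = K @ (grad_q(a) / alpha)                       # attraction toward high Q-values
        repulse = np.sum(K[:, :, None] * diff, axis=1) / ker_sigma**2
        a = a + eps * (drive + repulse) / m                   # SVGD update (Eq. (1))
    keep = np.all(np.abs(a - mu) <= t * sigma, axis=1)        # keep in-range particles only
    return a[keep]

# Toy usage: a quadratic Q peaked at (1, -1), so grad Q(a) = -(a - peak).
peak = np.array([1.0, -1.0])
acts = sample_actions(np.zeros(2), np.ones(2), grad_q=lambda a: -(a - peak))
print(acts.mean(axis=0))   # mean shifts from the initialization toward the Q-peak
```

The same loop can also accumulate the $-\epsilon\,\mathrm{Tr}(\nabla_a h)$ correction derived in Section 3.2 below, so that the policy entropy is estimated as a by-product of sampling.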
Eventually, the initial distribution \(q^0_\theta\) learns to contour the high-density region of the target distribution and the following updates refine it by converging to the spanned modes. Formally, the parameters \(\theta\) are computed by minimizing the expected KL-divergence between the policy \(q^L_\theta\) induced by the particles from the sampler and the EBM of the Q-values: \[ \begin{align*} \theta^* &= \arg\max_{\theta} \mathbb{E}_{s_t \sim D, a^L_\theta \sim \pi_\theta} \left[ Q_\phi(s_t, a^L_\theta) \right] + \alpha \mathbb{E}_{s_t \sim D} \left[ H(\pi_\theta(\cdot|s_t)) \right] \\ &\text{s.t. } -t\sigma_\theta \leq a^l_\theta \leq t\sigma_\theta, \quad \forall l \in [1, L]. \end{align*} \] Here, \(D\) is the replay buffer. The derivation is in Appendix E. Note that the constraint does not truncate the particles as it is not an invertible transformation which then violates the assumptions of the change of variable formula. Instead, we sample more particles than we need and select the ones that stay within the range. We call \(S^2\) AC(\(\phi, \theta\)) and \(S^2\) AC(\(\phi\)) as two versions of \(S^2\) AC with/without the parameterized initial distribution. The complete \(S^2\) AC algorithm is in Algorithm 1 of Appendix A. 3.2 A CLOSED-FORM EXPRESSION OF THE POLICY’S ENTROPY A critical challenge in MaxEnt RL is how to efficiently compute the entropy term \(H(\pi(\cdot|s_{t+1}))\) in Eq (2). We show that, if we model the policy as an iterative sampler from the EBM, under certain conditions, we can derive a closed-form estimate of the entropy at convergence. **Theorem 3.1.** Let \(F : \mathbb{R}^n \rightarrow \mathbb{R}^n\) be an invertible transformation of the form \(F(a) = a + \epsilon h(a)\). We denote by \(q^L(a^L)\) the distribution obtained from repeatedly applying \(F\) to a set of samples \(\{a^0\}\) from an initial distribution \(q^0(a^0)\) over \(L\) steps, i.e., \(a^L = F \circ F \circ \cdots \circ F(a^0)\). Under the condition \(\epsilon ||\nabla_a h(a_i)||_\infty \ll 1, \forall i \in [1, L]\), the distribution of the particles at the \(L^{th}\) step is: \[ \log q^L(a^L) \approx \log q^0(a^0) - \epsilon \sum_{l=0}^{L-1} \text{Tr}(\nabla_a h(a^l)) + O(\epsilon^2 dL). \] Here, \(d\) is the dimensionality of \(a\), i.e., \(a \in \mathbb{R}^d\) and \(O(\epsilon^2 dL)\) is the order of approximation error. **Proof Sketch:** As \(F\) is invertible, we apply the change of variable formula (Appendix C.2) on the transformation \(F \circ F \circ \cdots \circ F\) and obtain: \(\log q^L(a^L) = \log q^0(a^0) - \sum_{l=0}^{L-1} \log |\det(I + \epsilon \nabla_a h(a^l))|\). Under the assumption \(\epsilon ||\nabla_a h(a_i)||_\infty \ll 1\), we apply the corollary of Jacobi’s formula (Appendix C.3) and get Eq. (10). The detailed proof is in Appendix F. Note that the condition \(\epsilon ||\nabla_a h(a_i)||_\infty \ll 1\) can always be satisfied when we choose a sufficiently small step size \(\epsilon\), or the gradient of \(h(a)\) is small, i.e., \(h(a)\) is Lipschitz continuous with a sufficiently small constant. It follows from the theorem above, that the entropy of a policy modeled as an EBM sampler (Eq (9)) can be expressed analytically as: \[ H(\pi_\theta(\cdot|s)) = -\mathbb{E}_{a^0_\theta \sim q^0_\theta} \left[ \log q^L_\theta(a^L_\theta|s) \right] \approx -\mathbb{E}_{a^0_\theta \sim q^0_\theta} \left[ \log q^0_\theta(a^0_\theta|s) - \epsilon \sum_{l=0}^{L-1} \text{Tr}(\nabla_a h(a^l_\theta, s)) \right]. 
\] In the following, we drop the dependency of the action on \(\theta\) for simplicity of the notation. 3.3 INVERTIBLE POLICIES Next, we study the invertibility of three popular EBM samplers: SVGD, SGLD, and HMC as well as the efficiency of computing the trace, i.e., \(\text{Tr}(\nabla_a h(a^l, s))\) in Eq (10) for the ones that are invertible. **Proposition 3.2 (SVGD invertibility).** Given the SVGD learning rate \(\epsilon\) and RBF kernel \(k(\cdot, \cdot)\) with variance \(\sigma\), if \(\epsilon \ll \sigma\), the update rule of SVGD dynamics defined in Eq (1) is invertible. Proof Sketch: We use the explicit function theorem to show that the Jacobian $\nabla_a F(a, s)$ of the update rule $F(a, s)$ is diagonally dominated and hence invertible. This yields invertibility of $F(a, s)$. See detailed proof in Appendix G.3. **Theorem 3.3.** The closed-form estimate of $\log q^L(a^L|s)$ for the SVGD based sampler with an RBF kernel $k(\cdot, \cdot)$ is $$\log q^L(a^L|s) \approx \log q^0(a^0|s) + \frac{\epsilon}{m\sigma^2} \sum_{l=0}^{L-1} \sum_{j=1, a^l_j \neq a^l_i}^m k(a^l_j, a^l_i)((a^l_j - a^l_i)^T \nabla_{a^l_j} Q(s, a^l_j) + \frac{\alpha}{\sigma^2} \|a^l_j - a^l_i\|^2 - d\alpha).$$ Here, $(\cdot)^T$ denotes the transpose of a matrix/vector. Note that the entropy does not depend on any matrix computation, but only on vector dot products and first-order vector derivatives. The proof is in Appendix H.1. Intuitively, the derived likelihood is proportional to (1) the concavity of the curvature of the Q-landscape, captured by a weighted average of the neighboring particles’ Q-value gradients and (2) pairwise-distances between the neighboring particles ($\sim \|a^l_j - a^l_i\|^2 \cdot \exp(\|a^l_j - a^l_i\|^2)$), i.e., the larger the distance the higher is the entropy. We elaborate on the connection between this formula and non-parametric entropy estimators in Appendix B. **Proposition 3.4** (SGLD, HMC). The SGLD and HMC updates are not invertible w.r.t. $a$. Proof Sketch: SGLD is stochastic (noise term) and thus not injective. HMC is only invertible if conditioned on the velocity $v$. Detailed proofs are in Appendices G.1, G.2. From the above theoretic analysis, we can see that SGLD update is not invertible and hence is not suitable as a sampler for $S^2$AC. While the HMC update is invertible, its derived closed-form entropy involves calculating Hessian and hence computationally more expensive. Due to these considerations, we choose to use SVGD with an RBF kernel as the underlying sampler of $S^2$AC. ## 4 RESULTS We first evaluate the correctness of our proposed closed-form entropy formula. Then we present the results of different RL algorithms on multi-goal and MuJoCo environments. ### 4.1 ENTROPY EVALUATION This experiment tests the correctness of our entropy formula. We compare the estimated entropy for distributions (with known ground truth entropy or log-likelihoods) using different samplers and study the sensitivity of the formula to different samplers’ parameters. (1) Recovering the ground truth entropy. In Figure 4a, we plot samples (black dots) obtained by SVGD, SGLD, DLD and HMC at convergence to a Gaussian with ground truth entropy $H(p) = 3.41$, starting from the same initial distribution (leftmost sub-figure). We also report the entropy values computed via Eq. (1). Unlike SGLD, DLD, and HMC, SVGD recovers the ground truth entropy. This empirically supports Proposition 3.4 that SGLD, DLD, and HMC are not invertible. (2) Effect of the kernel variance. 
Figure 4b shows the effect of different SVGD kernel variances $\sigma$, where we use the same initial Gaussian from Figure 4a. We also visualize the particle distributions after $L$ SVGD steps for the different configurations in Figure 9 of Appendix E. We can see that when the kernel variance is too small (e.g., $\sigma = 0.1$), the invertibility is violated, and thus the estimated entropy is wrong even at convergence. On the other extreme when the kernel variance is too large (e.g., $\sigma = 100$), i.e., when the particles are too scattered initially, the particles do not converge to the target Gaussian due to noisy gradients in the first term of Eq. (1). The best configurations hence lie somewhere in between (e.g., $\sigma \in \{3, 5, 7\}$). (3) Effect of SVGD steps and particles. Figures 4c and Figure 10b (Appendix E) show the behavior of our entropy formula under different configurations of the number of SVGD steps and particles, on two settings: (i) GMM $M$ with an increasing number of components $M$, and (ii) distributions with increasing ground truth entropy values, i.e., Gaussians with increasing variances $\sigma$. Results show that our entropy consistently grows with an increasing $M$ (Figure 4c) and increasing $\sigma$ (Figure 10b), even when a small number of SVGD steps and particles is used (e.g., $L = 10, m = 10$). 4.2 Multi-goal Experiments To check if $S^2$AC learns a better solution to the max-entropy objective (Eq 2), we design a new multi-goal environment as shown in Figure 5. The agent is a 2D point mass at the origin trying to reach one of the goals (in red). Q-landscapes are depicted by level curves. Actions are bounded in $[-1, 1]$ along both axes. Critical states for the analysis are marked with blue crosses. It is built on the multi-goal environment in Haarnoja et al. (2017) with modifications such that all the goals have (i) the same maximum expected future reward (positive) but (ii) different maximum expected future entropy. This is achieved by asymmetrically placing the goals (two goals on the left side and one on the right, leading to a higher expected future entropy on the left side) while assigning the same final rewards to all the goals. The problem setup and hyperparameters are detailed in Appendix I. (1) Multi-modality. Figure 6 visualizes trajectories (blue lines) collected from 20 episodes of $S^2$AC($\phi$, $\theta$), $S^2$AC($\phi$), SAC, SQL and SAC-NF (SAC with a normalizing flow policy, Mazoure et al. (2020)) agents (rows) at test time for increasing entropy weights $\alpha$ (columns). $S^2$AC and SQL consistently cover all the modes for all $\alpha$ values, while this is only achieved by SAC and SAC-NF for large $\alpha$ values. Note that, in the case of SAC, this comes at the expense of accuracy. Although normalizing flows are expressive enough in theory, they are known to quickly collapse to local optima in practice Kobyzev et al. (2020). The dispersion term in $S^2$AC encodes an inductive bias to mitigate this issue. (2) Maximizing the expected future entropy. We also see that with increasing $\alpha$, more $S^2$AC and SAC-NF trajectories converge to the left goals ($G_2/G_3$). This shows both models learn to maximize the expected future entropy. This is not the case for SQL whose trajectory distribution remains uniform across the goals. SAC results do not show a consistent trend. This validates the hypothesis that the entropy term in SAC only helps exploration but does not lead to maximizing future entropy. 
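As a concrete illustration of the update behind this entropy estimate, the following minimal sketch applies Theorem 3.1 to a toy setting: a linear drift \(h(a) = \nabla_a \log p(a)\) toward a diagonal Gaussian target, chosen so that both the trace term and the exact change-of-variables log-determinant are available in closed form, rather than the SVGD direction used by \(S^2\)AC; all sizes and constants are placeholders.

```python
import numpy as np

# Illustrative only: a linear drift h(a) = grad_a log p(a) toward a diagonal Gaussian,
# so both Tr(grad_a h) and the exact log|det(I + eps * grad_a h)| are known in closed form.
rng = np.random.default_rng(0)
d, m, L, eps = 2, 1000, 200, 1e-3
sig_p = np.array([1.5, 0.5])              # target p = N(0, diag(sig_p^2))
sig_0 = 3.0                               # initial distribution q^0 = N(0, sig_0^2 I)

a = rng.normal(0.0, sig_0, size=(m, d))   # particles a^0
log_q = -0.5 * (a ** 2).sum(1) / sig_0 ** 2 - 0.5 * d * np.log(2 * np.pi * sig_0 ** 2)
log_q_exact = log_q.copy()

jac_diag = -1.0 / sig_p ** 2              # diagonal of grad_a h (constant because h is linear)
for _ in range(L):
    log_q -= eps * jac_diag.sum()                              # Theorem 3.1: trace update
    log_q_exact -= np.log(np.abs(1.0 + eps * jac_diag)).sum()  # exact change of variables
    a = a + eps * (-a / sig_p ** 2)                            # a^{l+1} = a^l + eps * h(a^l)

print("entropy estimate, trace approximation:", -log_q.mean())
print("entropy estimate, exact log-det      :", -log_q_exact.mean())
```

The two accumulated log-densities agree up to the stated \(O(\epsilon^2 dL)\) error, which is what makes the trace update a practical surrogate for the exact log-determinant.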
The quantified distribution over reached goals is in Figure 12 of Appendix I. (3) Robustness/adaptability. To assess the robustness of the learned policies, we place an obstacle (red bar in Figure 7) on the path to $G_2$. We show the test time trajectories of 20 episodes using $S^2$AC, SAC, SQL and SAC-NF agents trained with different $\alpha$’s. We observe that, for $S^2$AC and SAC-NF, with increasing $\alpha$, more trajectories reach the goal after hitting the obstacles. This is not the case for SAC, where many trajectories hit the obstacle without reaching the goal. SQL does not manage to escape the barrier even with higher $\alpha$. Additional results on the (4) effect of parameterization of $\theta^0$, and the (5) entropy’s effect on the learned Q-landscapes are respectively reported in Figure 11 and Figure 14 of Appendix I. Figure 6: $S^2$AC and SAC-NF learn to maximize the expected future entropy (biased towards $G_2/G_3$) while SAC and SQL do not. $S^2$AC consistently recovers all modes, while SAC-NF with smaller $\alpha$’s does not, indicating its instability. Figure 7: $S^2$AC and SAC-NF are more robust to perturbations. Obstacle $O$ is placed diagonally at $[-1, 1]$. Trajectories that did and did not reach the goal after hitting $O$ are in green and red, respectively. Figure 8: (a)-(e): Performance curves on the MuJoCo benchmark (training). $S^2$AC outperforms SQL and SAC-NF on all environments and SAC on 4 out of 5 environments. (f)-(i): Comparison of Median, IQM, Mean, and Optimality Gap between $S^2$AC and baseline algorithms. (j): The probabilities of $S^2$AC outperforming baseline algorithms. ### 4.3 MuJoCo Experiments We evaluate $S^2$AC on five environments from MuJoCo (Brockman et al., 2016): Hopper-v2, Walker2d-v2, HalfCheetah-v2, Ant-v2, and Humanoid-v2. As baselines, we use (1) DDPG (Gu et al., 2017), (2) PPO (Schulman et al., 2015), (3) SQL (Haarnoja et al., 2017), (4) SAC-NF (Mazoure et al., 2020), and (5) SAC (Haarnoja et al., 2018a). Hyperparameters are in Appendix K. (1) Performance and sample efficiency. We train five different instances of each algorithm with different random seeds, with each performing 100 evaluation rollouts every 1000 environment steps. Performance results are in Figure 8(a)-(e). The solid curves correspond to the mean returns over the five trials and the shaded region represents the minimum and maximum. $S^2$AC($\phi$, $\theta$) is consistently better than SQL and SAC-NF across all the environments and has superior performance than SAC in four out of five environments. Results also show that the initial parameterization was key to ensuring the scalability ($S^2$AC($\phi$) has poor performance compared to $S^2$AC($\phi$, $\theta$)). Figure 8(f)-(j) demonstrate the statistical significance of these gains by leveraging statistics from the reliable library (Agarwal et al., 2021) which we detail in Appendix K. (2) Run-time. We report the run-time of action selection of SAC, SQL, and $S^2$AC algorithms in Table 1. $S^2$AC($\phi$, $\theta$) run-time increases linearly with the action space. To improve the scalability, we train an amortized version that we deploy at test-time, following (Haarnoja et al., 2017). Specifically, we train a feed-forward deepnet $f_\psi(s, z)$ to mimic the SVGD dynamics during testing, where $z$ is a random vector that allows mapping the same state to different particles. 
Note that we cannot use $f_\psi(s, z)$ during training as we need to estimate the entropy in Eq (11), which depends on the unrolled SVGD dynamics (details in Appendix K). The amortized version $S^2$AC($\phi$, $\theta$, $\psi$) has a similar run-time to SAC and SQL with a slight tradeoff in performance (Figure 8). ### 5 Related Work MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) aims to learn a policy that gets high rewards while acting as randomly as possible. To achieve this, it maximizes the sum of expected future reward and expected future entropy. It is different from entropy regularization (Schulman et al., 2015; O’Donoghue et al., 2016; Schulman et al., 2017) which maximizes entropy at the current time step. It is also different from multi-modal RL approaches (Tang & Agrawal, 2018) which recover different modes with equal frequencies without considering their future entropy. MaxEnt RL has been broadly incorporated in various RL domains, including inverse RL (Ziebart et al., 2008; Finn et al., 2016), stochastic control (Rawlik et al., 2012; Toussaint, 2009), guided policy search (Levine & Koltun, 2013), and off-policy learning (Haarnoja et al., 2018a,b). MaxEnt RL is shown to maximize a lower bound of the robust RL objective (Eysenbach & Levine, 2022) and is hence less sensitive... to perturbations in state and reward functions. From the variational inference lens, MaxEnt RL aims to find the policy distribution that minimizes the KL-divergence to an EBM over Q-function. The desired family of variational distributions is (1) expressive enough to capture the intricacies of the Q-value landscape (e.g., multimodality) and (2) has a tractable entropy estimate. These two requirements are hard to satisfy. SAC (Haarnoja et al., 2018a) uses a Gaussian policy. Despite having a tractable entropy, it fails to capture arbitrary Q-value landscapes. SAC-GMM (Haarnoja, 2018) extends SAC by modeling the policy as a Gaussian Mixture Model, but it requires an impractical grid search over the number of components. Other extensions include IAPO (Marino et al., 2021) which also models the policy as a uni-modal Gaussian but learns a collection of parameter estimates (mean, variance) through different initializations. While this yields multi-modality, it does not optimize a MaxEnt objective. SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) respectively improve the policy expressivity by modeling the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow. Due to training instability, the reported multi-goal experiments in (Cetin & Celiktutan, 2022) show that, though both models capture multimodality, they fail to maximize the expected future entropy in positive reward setups. SQL (Haarnoja et al., 2017), on the other hand, bypasses the explicit entropy computation altogether via a soft version of value iteration. It then trains an amortized SVGD (Wang & Liu, 2016) sampler from the EBM over the learned Q-values. However, estimating soft value functions requires approximating integrals via importance sampling which is known to have high variance and poor scalability. We propose a new family of variational distributions induced by a parameterized SVGD sampler from the EBM over Q-values. Our policy is expressive and captures multi-modal distributions while being characterized by a tractable entropy estimate. 
EBMs (LeCun et al., 2006; Wu et al., 2018) are represented as Gibbs densities \( p(x) = \exp E(x)/Z \), where \( E(x) \in \mathbb{R} \) is an energy function describing inter-variable dependencies and \( Z = \int \exp E(x) \) is the partition function. Despite their expressiveness, EBMs are not tractable as the partition function requires integrating over an exponential number of configurations. Markov Chain Monte Carlo (MCMC) methods (Van Ravenzwaaij et al., 2018) (e.g., HMC (Hoffman & Gelman, 2014), SGLD (Welling & Teh, 2011)) are frequently used to approximate the partition function via sampling. There have been recent efforts to parameterize these samplers via deepnets (Levy et al., 2017; Gong et al., 2018; Feng et al., 2017) to improve scalability. Similarly to these methods, we propose a parameterized variant of SVGD (Liu & Wang, 2016) as an EBM sampler to enable scalability to high-dimensional action spaces. Beyond sampling, we derive a closed-form expression of the sampling distribution as an estimate of the EBM. This yields a tractable estimate of the entropy. This is opposed to previous methods for estimating EBM entropy which mostly rely on heuristic approximation, lower bounds (Dai et al., 2017, 2019a), or neural estimators of mutual information (Kumar et al., 2019). The idea of approximating the entropy of EBMs via MCMC sampling by leveraging the change of variable formula was first proposed in (Dar et al., 2019b). The authors apply the formula to HMC and LD, which, as we show previously, violate the invertibility assumption. To go around this, they augment the EBM family with the noise or velocity variable for LD and HMC respectively. But the derived log-likelihood of the sampling distribution turns out to be –counter-intuitively– independent of the sampler’s dynamics and equal to the initial distribution, which is then parameterized using a flow model (details in Appendix B.2). We show that SVGD is invertible, and hence we sample from the original EBM, so that our derived entropy is more intuitive as it depends on the SVGD dynamics. SVGD-augmented RL (Liu & Wang, 2016) has been explored under other RL contexts. Liu et al. (2017) use SVGD to learn a distribution over policy parameters. While this leads to learning diverse policies, it is fundamentally different from our approach as we are interested in learning a single multi-modal policy with a closed-form entropy formula. Castanet et al. (2023); Chen et al. (2021) use SVGD to sample from multimodal distributions over goals/tasks. We go beyond sampling and use SVGD to derive a closed-form entropy formula of an expressive variational distribution. 6 CONCLUSION We propose S²AC, an actor-critic algorithm that yields a more optimal solution to the MaxEnt RL objective than previously proposed approaches. S²AC achieves this by leveraging a new family of variational distributions characterized by SVGD dynamics. The proposed distribution has high expressivity, i.e., it is flexible enough to capture multimodal policies in high dimensional spaces, and a tractable entropy estimate. Empirical results show that S²AC learns expressive and robust policies while having superior performance than other MaxEnt RL algorithms. For future work, we plan to study the application of the proposed variational distribution to other domains and develop benchmarks to evaluate the robustness of RL agents. 
ACKNOWLEDGMENTS Bo An is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-GC-2023-009). Haipeng Chen is supported by William & Mary FRC Faculty Research Grants. REFERENCES Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. *arXiv preprint arXiv:1806.06920*, 2018. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Deep reinforcement learning at the edge of the statistical precipice. *NeurIPS*, 2021. I. Ahmad and Pi-Erh Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.). *IEEE Trans. Inf. Theory.*, 1976. Jan Beirlant and M.C.A van Zuijlen. The empirical distribution function and strong laws for functions of order statistics of uniform spacings. *J. Multivar. Anal.*, 1985. Jan Beirlant, Edward J Dudewicz, László Györfi, Edward C Van der Meulen, et al. Nonparametric entropy estimation: An overview. *IJSRMSS*, 1997. Laura T Bernhofen, Edward J Dudewicz, Janos Levendovszky, and Edward C van der Meulen. Ranking of the best random number generators via entropy-uniformity theory. *AJMMS*, 1996. Peter J. Bickel and Leo Breiman. Sums of Functions of Nearest Neighbor Distances, Moment Bounds, Limit Theorems and a Goodness of Fit Test. *Ann. Probab.*, 1983. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai Gym. *arXiv preprint arXiv:1606.01540*, 2016. Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. *Science*, 2017. Nicolas Castanet, Olivier Sigaud, et al. Stein variational goal generation for adaptive exploration in multi-goal reinforcement learning. 2023. Edoardo Cetin and Oya Celiktutan. Policy gradient with serial markov chain reasoning. *NeurIPS*, 2022. Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, and Yi Wu. Variational automatic curriculum learning for sparse-reward cooperative multi-agent problems. *NeurIPS*, 2021. Thomas M Cover. *Elements of information theory*. John Wiley & Sons, 1999. Noel Cressie. Power results for tests based on high-order gaps. *Biometrika*, 1978. Bo Dai, Hanjun Dai, Arthur Gretton, Le Song, Dale Schuurmans, and Niao He. Kernel exponential family estimation via doubly dual embedding. In *AISTAT*, 2019a. Bo Dai, Zhen Liu, Hanjun Dai, Niao He, Arthur Gretton, Le Song, and Dale Schuurmans. Exponential family estimation via adversarial dynamics embedding. *NeurIPS*, 2019b. Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. *arXiv preprint arXiv:1702.01691*, 2017. Jay L Devore, Kenneth N Berk, Matthew A Carlton, et al. *Modern mathematical statistics with applications*. Springer, 2012. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In *ICML*, 2016. Edward J Dudewicz and Edward C Van Der Meulen. Entropy-based tests of uniformity. *JASA*, 1981.
oXYZJXDdo7
“Incorporating high-frequency phrases can significantly increase the total number of phrases, leading to an extremely large candidate pool” -> Won’t the low-frequency phrases significantly increase the size of the candidate pool?
RETRIEVAL IS ACCURATE GENERATION Bowen Cao♠,* Deng Cai♡,† Leyang Cui♡ Xuxin Cheng♠ Wei Bi♡ Yuexian Zou♠ Shuming Shi♡ ♠ School of ECE, Peking University ♡ Tencent AI Lab {cbw2021,chengxx}@stu.pku.edu.cn, zouyx@pku.edu.cn thisisjcykcd@gmail.com, {leyangcui,victoriabi,shumingshi}@tencent.com ABSTRACT Standard language models generate text by selecting tokens from a fixed, finite, and standalone vocabulary. We introduce a novel method that selects context-aware phrases from a collection of supporting documents. One of the most significant challenges for this paradigm shift is determining the training oracles, because a string of text can be segmented in various ways and each segment can be retrieved from numerous possible documents. To address this, we propose to initialize the training oracles using linguistic heuristics and, more importantly, bootstrap the oracles through iterative self-reinforcement. Extensive experiments show that our model not only outperforms standard language models on a variety of knowledge-intensive tasks but also demonstrates improved generation quality in open-ended text generation. For instance, compared to the standard language model counterpart, our model raises the accuracy from 23.47% to 36.27% on OpenbookQA, and improves the MAUVE score from 42.61% to 81.58% in open-ended text generation. Remarkably, our model also achieves the best performance and the lowest latency among several retrieval-augmented baselines. In conclusion, we assert that retrieval is more accurate generation and hope that our work will encourage further research on this new paradigm shift. 1 INTRODUCTION Memorization or generalization, that is the question. Standard language models (LMs) break down the text generation process into sequential token predictions (Mikolov et al., 2010; Brown et al., 2020; OpenAI, 2022). Each token is a word (or sub-word) selected from a fixed, finite, and standalone vocabulary. To make the generation more attributable and accelerate the inference speed, Lan et al. (2023) propose a method named CoG that retrieves phrases from similar contexts, where the term “phrase” refers to any contiguous text segments of variable lengths. It is worth noting that, similar to other retrieval-augmented generation frameworks (Li et al., 2022; Asai et al., 2023), CoG still employs a two-stage pipeline, specifically document retrieval followed by grounded phrase extraction. The final performance is constrained by the quality and quantity of the return from the first stage. In this paper, we propose a new paradigm that completely removes the dependence on document retrieval. To our best knowledge, our work is the first that performs text generation through direct phrase retrieval. One core challenge of adopting this novel approach is the construction of the training oracles. That is a function mapping a string of text to an action sequence for creating training examples. For a given text, there exist numerous different ways to segment it into phrases, with each potential phrase being retrievable from a vast array of documents. To better align the generation process and the supporting documents, we introduce a two-fold approach: first, we leverage linguistics-motivated heuristics to initialize the training oracles. Second, we implement a bootstrapping mechanism through iterative self-reinforcement, gradually refining the oracles with each iteration. Unlike Lan et al. 
(2023) which only evaluates the generation fluency in open-ended text generation, we carry out comprehensive and rigorous evaluation in a wide range of knowledge-intensive *Work done during an internship at Tencent AI Lab. †Corresponding author. tasks, e.g., open-domain question answering. Our proposed model exhibits superior zero-shot performance, outperforming the baseline method. For example, on the OpenbookQA dataset, our model dramatically improves upon base LM, presenting an increase in accuracy from 23.47% to 36.27% (Table 1). Our model also demonstrates improved quality in open-ended text generation, as evidenced by the improvement of 38.97% in the MAUVE score (Table 3). Moreover, it shows even better performance when switching to an enlarged (Table 2) or domain-specific (Table 3) phrase table, without any further training. In addition, our model attains the fastest generation speed among retrieval-augmented baselines (Table 4). We believe that our study can inspire future research to build more efficient and accurate LMs that harness the power of retrieval-based approaches. In summary, the contributions of this paper can be summarized as follows: - We introduce a new approach for language modeling that focuses on directly selecting context-aware phrases from a set of supporting documents. - We propose a novel method for decomposing text generation into sequential next-phrase retrieval by linguistics-driven heuristics and iterative self-reinforced bootstrapping. - We validate the effectiveness of our models on various downstream tasks, including open-domain and domain-specific question answering, as well as open-ended text generation, highlighting substantial improvements over standard LMs and several retrieval-augmented baselines. 2 A Unified View of Generation and Retrieval Standard language models (LMs) factorize the generation probability of a sequence \( x = [x_1, x_2, \ldots, x_n] \) into a series of conditional probabilities \( p(x) = \prod_{i=1}^{n} p(x_i | x_{<i}) \). Hence, the generation is often performed by repeatedly predicting the next token based on the generated sequence thus far (i.e., prefix). The next-token prediction probabilities are computed as \[ p(x_i | x_{<i}) = \frac{\exp(E_p(x_{<i}) \cdot E_c(x_i))}{\sum_{x' \in V} \exp(E_p(x_{<i}) \cdot E_c(x'))}, \] where \( E_p(x_{<i}) \) is a vector representation of the prefix \( x_{<i} \), \( E_c(x) \) denotes a vector representation of the token \( x \), and \( V \) stands for the token vocabulary. Through the above notations, we can see that the standard LMs can be viewed as a dual-encoder matching network connecting different prefixes and tokens. Typically, as shown in the left part of Figure 1, the source encoder \( E_p \) is implemented by a multi-layer neural network (e.g., Transformers) while the target encoder \( E_c \) is simply a token embedding layer. As seen, the design of the dual-encoder network is heavily unbalanced; The source side is much more complex than the target side. Recently, a retrieval-augmented LM, CoG (Lan et al., 2023), has been proposed. In addition to token selection, CoG also allows for phrase retrieval (i.e., variable-length \( n \)-grams) from a collection of supporting documents. From our point of view, CoG augments the target side of conventional LMs. First, the candidate pool is enlarged to include phrases of variable lengths. Second, the target encoder not only considers the candidates themselves but also their contexts. 
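As a minimal sketch of this balanced dual-encoder view (illustrative only: random vectors stand in for the outputs of the prefix encoder \(E_p\) and the target-side encoders, and all sizes are toy values), the next-target distribution of Eq. (1) carries over unchanged once the candidate pool contains context-aware phrase vectors alongside token embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_tokens, n_phrases = 256, 5000, 20000   # toy sizes

# Stand-ins for encoder outputs; in the real model E_p and E_c are Transformer encoders.
prefix_vec = rng.normal(size=dim)                    # E_p(x_<i)
token_embs = rng.normal(size=(n_tokens, dim))        # E_c(x) for standalone tokens
phrase_vecs = rng.normal(size=(n_phrases, dim))      # E_c(s) for context-aware phrases

# The target side is the union of the token vocabulary and the phrase table.
candidates = np.concatenate([token_embs, phrase_vecs], axis=0)

# Eq. (1), generalized to the enlarged candidate pool: softmax over dot products.
logits = candidates @ prefix_vec
probs = np.exp(logits - logits.max())
probs /= probs.sum()

best = int(probs.argmax())
print("best candidate:", best, "(phrase)" if best >= n_tokens else "(token)")
```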
However, searching phrases from large-scale corpora is resource-intensive. Therefore, CoG adopts a two-stage search strategy: relevant documents are first retrieved to reduce the search space for phrase selection. To construct the training oracles, CoG uses a forward maximum matching algorithm to find the longest matching phrases from the retrieved documents. Despite promising results, CoG cannot guarantee to provide a globally optimal solution for phrase retrieval, and is highly dependent on the external tool for document retrieval. In contrast, we present a new paradigm, which we call CoG-2, that generates text directly through phrase retrieval. 3 The Proposed Method — CoG-2 3.1 Overview Our research aims to enhance the interpretability and factuality of language models (LMs) by transitioning from token generation to phrase retrieval. First, the semantics of phrases are enhanced by their surrounding contexts (Mikolov et al., 2013), leading to a more discriminative representation for inference. Second, each retrieved phrase can be traced back to its original document, enhancing the accountability of the output. Figure 1: Comparison between our method and standard language models. Both can be viewed as dual-encoder matching networks connecting source prefixes and target continuations. On the target side, standard language models employ an immediate embedding layer for target tokens from a fixed, finite, and standalone vocabulary. In contrast, our methods uses an expressive phrase encoder for target phrase from an editable, extensible, and contextualized phrase table. | Flag burning | sends | ... Each song has a powerful message designed to stop and make you think about your life ... | |--------------|-------|------------------------------------------------------------------------------------------| | Flag burning | sends | the song ... sends a powerful message through its lyrics, telling listeners to 'keep going' and to fight for ... | | Flag burning | sends a powerful message | ... for its "very bold move making tonight plant-based. It really sends a powerful message" Soon after, Critics' Choice and SAG ... | Figure 2: Four possible generation paths for the sentence “Flag burning sends a powerful message”. Content highlighted in blue (red) are phrases retrieved from supporting documents (from the token vocabulary). Standard LMs can be viewed as only considering the generation path at the bottom. To link a given prefix with a set of variable-length phrases, our model follows the dual-encoder structure as described in Section 2 but emphasizes a balanced design in contrast to standard LMs that heavily favor the source side (see Figure 1). Specifically, the source encoder $E_p(\cdot)$ is a multi-layer neural network (e.g., Transformer) as usual. The target encoder $E_r(\cdot)$ is also a multi-layer neural network to learn context-aware representation for phrases in supporting documents. Similar to standard LMs, we employ dot product as the matching measure. During inference, we can use efficient maximum inner product search (MIPS) algorithms (Shrivastava & Li [2014], Guo et al. [2016], Seo et al. [2019]) to retrieve from a large pool of candidate phrases. The overall framework is depicted in Figure 1. The remaining question is how to train our models. 3.2 Training Oracles We break down text generation into a series of next-phrase retrieval. 
Formally, each step takes the current prefix $p$ as its state, an oracle policy $\pi^*$ maps the state to an action $\pi^*(p) \rightarrow (f, s)$, where $f$ is a follow-up phrase and $s$ is a copy of the phrase $f$ in a supporting document. As illustrated in Figure 2, to create such triplets $(p, f, s)$ from raw corpora presents two challenges. First, the boundary of the phrase $f$ is unclear given a continuation can be divided in various ways. Second, the source of each phrase $s$ is unclear because a phrase can appear numerous times across a vast number of documents. On the other hand, the variety of generation paths for a given text also indicates that training oracles are crucial for optimal and quick convergence of our models. To tackle the above problems, we first present a set of linguistics-motivated heuristics to initialize the training oracles (Section 3.2.1), then describe how we allow the model to refine its generation paths in a self-reinforcement manner (Section 3.2.2). 3.2.1 Linguistics-Motivated Heuristics We start to design the training oracles through the following principles. **Syntactic Structure.** Inspired by the syntactic structure of language and its implications on language generation (Chomsky [1957], Dyer et al. [2016], Li et al. [2023b]), we restrict the phrase to a contiguous sequence of words that corresponds to a constituent unit in a syntactic parse tree. This approach ensures that each phrase possesses a relatively complete and well-defined meaning, while avoiding arbitrary word combinations that could result in semantic ambiguity or nonsensical formations (Morgan & Newport [1981]). **Distributional Sparsity.** The inclusion of high-frequency phrases significantly inflates the size of the candidate pool. This is due to our treatment of lexically identical phrases in different contexts as distinct entries in the pool. Consequently, a single high-frequency phrase could potentially introduce tens of thousands, or even millions, of entries. In our analysis of Wikipedia, we discovered that eliminating just the top 1% of high-frequency phrases could reduce the total number of entries by 50%. However, these high-frequency phrases, such as ‘as well as’, often lack specific meanings. Their inclusion may result in imbalanced training, which could adversely affect the model’s overall performance. Regarding phrases with extremely low frequency, we consider them to be rare usages with limited practical use. Including them would notably increase the complexity of training. Therefore, we also choose to exclude them. **Semantic Similarity.** Although a lexically identical copy of a phrase can be located in various places, it is crucial to account for polysemy (Cruse [1986]), as lexically identical phrases can exhibit different meanings depending on their contexts. Moreover, even when lexically identical phrases share similar meanings, subtle nuances can arise from different contexts, necessitating a thorough evaluation of semantic similarity when selecting the most appropriate matching (Min et al. [2019]). Specifically, we first run the Stanford Parser[^1] to extract constituents from the training data. We then filter these constituents based on the following criteria: (1) remove trivial constituents with labels such as WHADJP, WHADVP; (2) exclude constituents that are too short (< 2 words) or too long (> 10 words); (3) discard constituents with excessively high or low Inverse Document Frequency (IDF) (Salton & Buckley [1988]) values. 
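A minimal sketch of the length- and IDF-based filtering described above (illustrative only: raw word \(n\)-grams stand in for parser constituents, the IDF bounds are placeholders, and the relaxed threshold for longer constituents discussed below is omitted):

```python
import math
from collections import Counter

# Toy "documents"; in the paper these are Wikipedia text blocks, and the candidate
# phrases come from a constituency parser rather than raw word n-grams.
docs = [
    "flag burning sends a powerful message to lawmakers",
    "the song sends a powerful message through its lyrics",
    "critics praised the bold move as well as the message",
]

def ngrams(tokens, lo=2, hi=10):
    for n in range(lo, min(hi, len(tokens)) + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

# Document frequencies over candidate phrases.
df = Counter()
for doc in docs:
    df.update(set(ngrams(doc.split())))
N = len(docs)

def idf(phrase):
    return math.log(N / (1 + df[phrase]))

# Criteria (2) and (3): 2-10 words long, IDF neither too low (too common) nor too high (too rare).
IDF_LOW, IDF_HIGH = 0.0, math.log(N)       # made-up bounds for this toy corpus
def keep(phrase):
    return 2 <= len(phrase.split()) <= 10 and IDF_LOW < idf(phrase) < IDF_HIGH

kept = [p for p in df if keep(p)]
print(f"kept {len(kept)} of {len(df)} candidate phrases")
```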
Notably, we apply a more lenient IDF threshold for longer constituents. Next, we group lexically identical phrases and compute the pairwise semantic similarities using BM25 (Robertson et al. [2009]) and an off-the-shelf phrase encoder (Lee et al. [2021b]). Consequently, we can identify the most suitable next phrase for each prefix based on the scores. For more detailed information, please refer to the Appendix A. 3.2.2 Iterative Self-Reinforcement The generation paths determined by the above heuristics are model-agnostic and could be noisy and sub-optimal (Welleck et al. [2019]). To further improve performance, we allow the model to adjust its own generation paths based on the capabilities it has acquired. That is, transitioning from imitating the oracles to reinforcing its own preferences. In particular, we propose a bootstrapping algorithm to iteratively adjust the target phrases. For each prefix $p$, we first let the model retrieve the $k$-best phrases in the entire candidate pool using its current policy. Then, we choose the valid phrase with the highest semantic matching score from these $k$ phrases as the new target. If no such phrase is found, i.e., none of the $k$-best phrases match the ground-truth continuation, we retain the previous target. The above process is repeated periodically. We present an example in Appendix B. 3.3 Training Objectives We optimize our model using the InfoNCE loss (Oord et al. [2018], Karpukhin et al. [2020]), for which a negative phrase set $\mathcal{N}(p)$ is introduced for each triplet $(p, f, s)$. $$L_p = \frac{\exp(E_p(p) \cdot E_c(s))}{\exp(E_p(p) \cdot E_c(s)) + \sum_{t \in \mathcal{N}(p)} \exp(E_p(p) \cdot E_c(t))}$$ (2) The construction of the negative phrase set $\mathcal{N}(p)$ is detailed below. To preserve the ability for token-level generation, we also train our model with the standard next-token prediction loss $L_t$ (Lan et al. [2023]). The training objective is formulated as $L_p + \alpha L_t$. [^1]: https://stanfordnlp.github.io/stanza/ Negative Sampling. We incorporate two types of negative examples to improve the model’s ability to differentiate phrases: (1) In-batch negatives: We regard all other candidate phrases in the same training batch as this type of negative example. These negatives help the model learn more discriminative representations on a large scale without incurring considerable costs. (2) Hard negatives: Recall that in Section 3.2.2 we periodically update the generation targets by retrieving top-\(k\) candidate phrases for each prefix. Among these \(k\) phrases, despite one may be chosen as the new generation target, the remaining phrases can serve as strong negatives because they are likely to confuse the model. Note that the above negatives may contain false negatives, which are not chosen as targets but still make a valid follow-up. To minimize the risk, we remove all phrases that constitute a prefix of the groundtruth continuation. 3.4 Models Prefix Encoder. We treat the prefix as a sequence of tokens with previously predicted phrases split into tokens. This token sequence is encoded using the standard Transformer architecture with causal attention (Vaswani et al., 2017; Radford et al., 2019). The prefix representation is obtained through a linear projection of the last-layer representation of the final token in the sequence. Phrase Encoder. We employ a deep bidirectional Transformer (Vaswani et al., 2017; Devlin et al., 2019) to generate contextualized token representations of a supporting document. 
The representation of a phrase is obtained by concatenating the representations of its first and last tokens, followed by projecting the concatenated representation to the same dimension as the prefix representation. To preserve the ability to compose output using single tokens, we also add the token vocabulary to our phrase table. These standalone tokens can be considered as special phrases, and their representations are obtained through the standard embedding layer of the LM.

4 Experiment Setup

4.1 Implementation Details

We train our model on the training set of MiniPile (Kaddour, 2023), and use the English Wikipedia dump of March 1, 2022 as supporting documents. Specifically, we split each Wikipedia article into multiple, disjoint text blocks of up to 128 words as documents, which results in 29,488,431 documents. The size of our phrase index is 137,101,097. We use GPT-2 (Radford et al., 2019) and DensePhrases (Lee et al., 2021b) to initialize the prefix encoder and the phrase encoder, respectively. For efficiency, we solely fine-tune the prefix encoder. This avoids the computational burden of re-computing phrase embeddings associated with updating the phrase encoder. While revising the training oracles via self-reinforcement, we retrieve the top \(k = 128\) phrases for each prefix.

---
https://huggingface.co/datasets/JeanKaddour/minipile
https://huggingface.co/datasets/wikipedia
https://huggingface.co/princeton-nlp/densephrases-multi

4.2 Inference Details

During inference, we employ FAISS (Johnson et al., 2019), a library for vector similarity search and clustering, for efficient retrieval.

Continuation Generation. For text generation, we directly retrieve top-\(k\) candidates from the entire phrase table (including both context-aware phrases and standalone tokens). We then apply a softmax function to the matching scores of these candidates, creating a next-phrase probability distribution (Shi et al., 2024), and use top-\(p\) sampling (Holtzman et al., 2020) for selecting the next phrase. In all experiments, we set \(k\) to 128 (see the analysis on \(k\) in Table 7 in Appendix G) and \(p\) to 0.95. To control the ratio of phrase retrieval, we filter out phrases with probabilities below a threshold. The threshold is set to \(\phi = 0.4\) if not otherwise specified.

Likelihood Estimation. To calculate the likelihood of a given text, we approximate the likelihood by summing over all possible generation paths. For instance, given the sentence "The moon rises", the following generation paths may exist: (1) The→moon→rises; (2) The moon→rises; (3) The moon rises. The probability of each path is the product of the probabilities of all phrases (tokens) along that path. For example, the probability of the path (2) is calculated by \( p(\text{rises}|\text{The moon}) \cdot p(\text{The moon}) \). The probabilities of each step are obtained in the same way as we construct the next-phrase probability distribution for continuation generation. Note that the sum over all possible paths can be computed efficiently using dynamic programming with time complexity \( O(n^2) \), where \( n \) represents the number of tokens in the text.

4.3 Baselines

We compare the proposed method with a standard LM in the zero-shot setting, and also include the following state-of-the-art retrieval-augmented methods as baselines:

**Base LM** is the standard token-level language model using the Transformer (Vaswani et al., 2017) architecture. We fine-tune the pre-trained GPT-2 (Radford et al., 2019).
**kNN-LM** (Khandelwal et al., 2020) is a retrieval-augmented LM that interpolates the next-token distribution of the base LM with a \( k \)-nearest neighbors (\( k \)NN) model.

**RETRO** (Borgeaud et al., 2022) is a retrieval-augmented LM incorporated with a pre-trained document retriever, a document encoder and a cross-attention mechanism.

**CoG** (Lan et al., 2023) is another retrieval-augmented LM that adopts a two-stage search pipeline. It first retrieves semantically-relevant documents, and then considers all \( n \)-grams within them as candidate phrases.

---
https://huggingface.co/gpt2
https://github.com/lucidrains/RETRO-pytorch
https://github.com/gmftbyGMFTBY/Copyisallyouneed

5 Experiments

We verify the effectiveness of our methods on a set of knowledge-intensive tasks and open-ended text generation tasks without fine-tuning.

5.1 Knowledge-Intensive Tasks

5.1.1 Datasets

We employ five knowledge-intensive datasets, including three open-domain QA datasets: OpenbookQA (Mihaylov et al., 2018), ARC-Challenge (Clark et al., 2018), and TruthfulQA (Lin et al., 2022); and two domain-specific (medical) datasets: MedMCQA (Pal et al., 2022) and Med-USMILE (Jin et al., 2021). The details for these datasets can be found in Appendix C. In line with prior research (Brown et al., 2020; Sanh et al., 2022), we adopt a classification-with-options methodology to quantify the model performance. This approach involves presenting the model with a range of options and calculating the likelihood of each option being the correct response. The option with the highest probability is selected as the model’s prediction. We then report the accuracy of the model’s predictions.

| Model | TruthfulQA | OpenbookQA | ARC-Challenge | MedMCQA | Med-USMILE |
|-------|-----------|------------|---------------|---------|------------|
| Base LM (w/o FT) | 30.27 | 22.67 | 24.52 | 27.96 | 24.89 |
| Base LM | 29.73 | 23.47 | 23.92 | 28.33 | 24.19 |
| kNN-LM | 30.27 | 22.93 | 24.82 | 27.96 | 24.72 |
| RETRO | 27.53 | 26.13 | 22.21 | 25.68 | 25.33 |
| CoG | 34.11 | 35.47 | 27.24 | 29.07 | 25.07 |
| Ours | **34.27** | **36.27** | **28.27** | **29.44** | **25.69** |
| Ours (w/o phrase) | 28.63 | 23.73 | 22.51 | 27.42 | 24.80 |

Table 1: Experiments on knowledge-intensive tasks. Ours (w/o phrase): a variant of our model that restricts the model to only use standalone tokens without retrieving context-aware phrases.

| Model | TruthfulQA | OpenbookQA | ARC-Challenge | MedMCQA | Med-USMILE |
|-------|-----------|------------|---------------|---------|------------|
| Ours | 34.27 | 36.27 | **28.27** | 29.44 | 25.69 |
| w/ enlarged index | **39.59** | **37.07** | 27.14 | **31.63** | **27.87** |

Table 2: Results for our model with an enlarged phrase index.

5.1.2 Results

We compare our methods with baselines in knowledge-intensive tasks across several settings.

**Main Results.** As shown in Table 1, our model consistently outperforms various baseline models across all datasets. Compared with base LM, our model improves the accuracy of the TruthfulQA and OpenbookQA datasets from 29.73% to 34.27% and 23.47% to 36.27%, respectively. When we eliminate the phrase retrieval from our model and only use standalone tokens (Ours w/o phrase), there is a considerable drop in performance, demonstrating the effectiveness of incorporating phrase retrieval in our methods. Note that the models presented in Table 1 are initialized from pre-trained LMs. To analyze the role of pre-trained models in our framework, we train all models from scratch with random initialization.
The results are shown in Table 8 in Appendix G; our model outperforms the baselines across all datasets. For example, our model achieves a 12.8% absolute improvement on OpenbookQA over base LM, suggesting that our training framework is not heavily dependent on pre-trained models. To elucidate the role of phrase retrieval in knowledge-intensive tasks, we delve into a case study depicted in Appendix D. **Enlarged Phrase Index.** Recall that we exclude phrases with excessively high or low IDF values (Section 3.2.1). This strategy not only stabilizes the training process but also improves training efficiency. However, the phrases initially filtered out can be repurposed to augment our phrase index in a training-free manner. This expanded phrase index, now three times larger than the original, underscores the scalability of our approach. As evidenced in Table 2, this expansion boosts our model’s performance, such as a 5.32% increase in accuracy on TruthfulQA. This not only highlights our model’s potential to generalize to unseen phrases and documents but also emphasizes its plug-and-play feature, capable of adapting to a larger phrase table without the need for re-training. **Domain Adaption.** The plug-and-play property of the phrase index further motivates us to employ a domain-specific index for the QA tasks in the medical domain without any domain-specific training. To this end, we construct an index consisting of 3 million phrases by extracting phrases from a small text collection of the medical domain[^8]. For comparison purpose, we also fine-tune the base LM on it for fair comparison. As illustrated in Table 3, despite the considerable reduction in index size compared to the original Wikipedia index (3 million vs 137 million), our model exhibits even better performance on two medical QA datasets. This result underscores our model’s capability to enhance its performance in specific domains by leveraging a domain-specific, well-curated phrase index in a training-free manner. | MedMCQA | Med-USMILE | |---------|------------| | Base LM (FT) | 28.79 | 25.15 | | General index | 29.44 | 25.69 | | Medical index | **29.50** | **26.38** | | w/o phrase | 27.42 | 24.80 | Table 3: Results on medical datasets. 5.2 Open-Ended Text Generation We conduct open-ended text generation experiments on the test set of MiniPile [Kaddour, 2023]. For each document in the test set, we adopt the first 128 tokens as the prefix. The baselines and our model are required to generate text continuations of 128 tokens in length based on the same prefix. [^8]: https://huggingface.co/datasets/gamino/wiki_medical_terms | Model | MAUVE↑ | Coherence↓ | Diversity↑ | Latency↓ | |---------------|--------|------------|------------|----------| | Base LM (w/o FT) | 69.68 | 3.64 | 83.14 | 1.00x | | Base LM | 42.61 | 3.56 | 78.72 | 1.00x | | kNN-LM | 13.07 | 5.63 | **88.10** | 6.29x | | RETRO | 62.39 | 4.82 | 80.96 | 1.51x | | CoG | 52.27 | **2.08** | 55.04 | 4.40x | | Ours | **81.58** | 3.25 | 76.26 | **1.29x** | Table 4: Results for open-ended text generation. | Model | Fluency | Coherence | Informativeness | Grammar | |---------------|---------|-----------|-----------------|---------| | Base LM (w/o FT) | 2.91 | 2.33 | 2.35 | 3.00 | | Base LM | 2.81 | 2.37 | 2.40 | 2.79 | | Ours | **2.95** | **2.70** | **2.67** | **3.02** | Table 5: Human evaluation results. 
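Before turning to the evaluation metrics for open-ended generation, the likelihood estimation of Section 4.2, which also underlies the classification-with-options scoring used for the QA results above, can be sketched as a short dynamic program (illustrative only: `step_prob` is a hypothetical stand-in for the model's next-phrase probability, and the scores are not normalized):

```python
# Sketch of the O(n^2) dynamic program from Section 4.2: the likelihood of a text is the
# sum over all segmentations into phrases, each phrase scored given its prefix.
SUPPORT = "the moon rises in the east"

def step_prob(prefix_tokens, phrase_tokens):
    phrase = " ".join(phrase_tokens)
    base = 0.4 ** len(phrase_tokens)                   # longer phrases get lower base score
    return base * (2.0 if phrase in SUPPORT else 1.0)  # toy bonus for retrievable phrases

def sequence_likelihood(tokens):
    n = len(tokens)
    like = [0.0] * (n + 1)        # like[i] = total score of generating tokens[:i]
    like[0] = 1.0
    for i in range(1, n + 1):                           # O(n^2) dynamic program
        for j in range(i):                              # phrase tokens[j:i] after prefix tokens[:j]
            like[i] += like[j] * step_prob(tokens[:j], tokens[j:i])
    return like[n]

# Classification with options: score each option and pick the highest one.
options = ["the moon rises", "the moon sets"]
scores = {opt: sequence_likelihood(opt.split()) for opt in options}
print(scores, "->", max(scores, key=scores.get))
```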
### 5.2.1 Evaluation Metrics Following previous works (Welleck et al., 2020; Su et al., 2022; Lan et al., 2023), we utilize three automatic evaluation metrics to measure the quality of the generated texts: (i) **MAUVE** (Pillutla et al., 2021) captures the overall usefulness of the generated text by estimating the average utility of the content; (ii) **Coherence** measures the logical consistency and flow of the generated text, ensuring that the output is well-structured and easy to understand; and (iii) **Diversity** evaluates the variety of generated content, promoting the generation of unique and creative text. We report MAUVE and diversity as percentages (%). The details for these metrics can be found in Appendix E. We also measure the average time cost for a model to decode a continuation consisting of 128 tokens given a prefix of 128 tokens, referred to as **latency**. ### 5.2.2 Results As shown in Table 4, our model attains the highest MAUVE score among all models, demonstrating the high quality of the generated text. Other retrieval-augmented methods underperform base LM in the MAUVE score due to text degeneration, which aligns with findings in previous work (Wang et al., 2023). Our model also shows a strong balance between coherence and diversity. The coherence score of our model is 3.25, which outperforms most baselines except for CoG. However, we find that CoG often generates lexically similar, meaningless sentences, which is reflected in its low diversity score of 55.04%. Meanwhile, our model’s diversity score is 76.26%, which is slightly lower than some baseline models, but these models often generate incoherent sentences, as reflected in their lower coherence scores. **Human Evaluation.** To gain further insights, we randomly sample 100 cases and evaluate the results of the base LM, the base LM without fine-tuning (w/o FT), and our model from four perspectives: fluency, coherence, informativeness, and grammar. Each aspect is scored on a Likert scale from 1 to 4 (1 represents "bad", 2 stands for "fair", 3 is considered "good", and 4 signifies "very good"). We report the average scores in Table 5. As we can see, our method outperforms the base LM in all four categories, especially in coherence and informativeness. This indicates that our model, based on phrase retrieval, is better at following the preceding context and providing more informative content. As for the lower scores of the base LM compared to the base LM (w/o FT), we find that they are largely due to formatting issues. Further analysis can be found in Appendix F. **Generation Speed.** We now discuss the generation latency of different models. In Table 4, we report the relative latency, taking the base LM as the baseline. kNN-LM incurs the highest cost due to the need for interpolating the base LM’s token distribution with another distribution computed using its datastore. The CoG model also exhibits a notable overhead as it involves extracting all n-grams from the retrieved documents, applying softmax over tokens and all n-grams, and sampling from the resulting probability distribution. The RETRO model, although faster than the previous two, still... requires time for applying the representations of retrieved text chunks in attention computation. Our method stands out with the highest generation speed, since it directly retrieves and utilizes phrases. 
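The main reason for this speed advantage is that a single maximum inner product search over the phrase index replaces document retrieval and re-ranking. A minimal sketch of this inference step with FAISS is shown below (illustrative only: random vectors stand in for the trained prefix and phrase representations; \(k = 128\) and \(p = 0.95\) mirror the settings in Section 4.2):

```python
import numpy as np
import faiss   # pip install faiss-cpu

rng = np.random.default_rng(0)
dim, n_candidates, k = 128, 100_000, 128

# Stand-ins for the precomputed phrase/token vectors and one prefix vector.
cand_vecs = rng.normal(size=(n_candidates, dim)).astype("float32")
prefix_vec = rng.normal(size=(1, dim)).astype("float32")

index = faiss.IndexFlatIP(dim)             # exact maximum inner product search
index.add(cand_vecs)
scores, ids = index.search(prefix_vec, k)  # one MIPS query over the whole phrase table

# Softmax over the k retrieved candidates, then top-p (nucleus) sampling with p = 0.95.
logits = scores[0]
probs = np.exp(logits - logits.max())
probs /= probs.sum()
order = probs.argsort()[::-1]
cut = int(np.searchsorted(np.cumsum(probs[order]), 0.95)) + 1
keep = order[:cut]
next_id = rng.choice(ids[0][keep], p=probs[keep] / probs[keep].sum())
print("next candidate id:", int(next_id))
```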
**Effect of Self-reinforcement.** Ablation studies on the effect of the Self-Reinforcement (SR) mechanism reveal significant insights into the performance of our model. In the case of knowledge-intensive tasks, we do not observe a significant impact of SR on our model’s performance (refer to Table 6 in Appendix G). This suggests that our framework is inherently effective in handling such tasks, even without the aid of SR. However, the scenario differs for open-ended text generation. Table 6 shows that models trained with SR exhibit substantial improvements in the MAUVE scores across multiple rounds, which indicates the importance of SR in enhancing the quality of text generation. After the second round, we do not observe noticeable improvements with additional rounds of SR iteration, suggesting that the model converges to its optimal state. ### 6 RELATED WORK Standard language models (LMs) (Radford et al., 2019; Brown et al., 2020) are trained to predict the next token given a text prefix. With a vast amount of training corpora and model parameters, these models show strong zero-shot performance on various downstream tasks, serving as a unified solution for natural language processing. However, scaling up the model parameters and training corpora can be very expensive and cannot be done in a timely manner. To tackle the above issues, there has been an increasing body of work that enhances the parametric LM with a non-parametric component (Li et al., 2022; Guu et al., 2020; Lewis et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022) ground the next token prediction on a set of relevant documents obtained using retrieval techniques (Robertson & Zaragoza, 2009; Karpukhin et al., 2020; Khandelwal et al., 2020; Yogatama et al., 2021; Zhong et al., 2022) augment the output probability distribution with non-parametric nearest neighborhood estimation. Also, the retrieve-then-generate paradigm has been extensively studied in specific downstream tasks, such as code generation (Hashimoto et al., 2018), question answering (Ye et al., 2023; Karpukhin et al., 2020; Lee et al., 2021a), open-domain dialogue systems (Weston et al., 2018; Wu et al., 2019; Cai et al., 2019a,b), and machine translation (Khandelwal et al., 2021; Cai et al., 2021), multimodal retrieval (Jin et al., 2023; Li et al., 2023a). The work most closely related to ours is that of Min et al. (2022) and Lan et al. (2023). The former explores a similar idea in the area of masked language models to enhance natural language understanding. Lan et al. (2023), on the other hand, allows the copy of phrases from the grounding documents. However, their approach still relies on a two-stage pipeline, grounding the generation on a small set of retrieved documents only. While Lan et al. (2023) simply employs the longest common subsequence algorithm to find phrases that can be copied from the retrieved documents, we present heuristics-based and self-reinforced mechanisms to construct reliable training oracles. Also, Lan et al. (2023) only evaluates the performance on open-ended text generation tasks. ### 7 CONCLUSION We presented CoG-2, a novel retrieval-based text generation approach using context-aware phrase retrieval. Our method addresses the primary challenge of constructing training oracles through heuristic-based initialization and iterative self-reinforcement. 
Experiments on knowledge-intensive tasks and open-ended text generation tasks show that the proposed method outperforms the standard LM and state-of-the-art retrieval-augmented methods. Moreover, our model exhibits superior performance with either an enlarged or a smaller, domain-specific index, and achieves the lowest generation latency compared to other retrieval-augmented baselines. This work contributes to the NLP research community by promoting a paradigm shift towards more accurate generation via retrieval. As we continue to explore and refine the paradigm, we invite readers to consider the limitations of our current work, as detailed in Appendix H, to fully appreciate the scope of future research. | | MAUVE↑ | Coh.↓ | Div.↑ | |----------|--------|-------|-------| | w/o SR | 7.86 | 4.14 | 81.14 | | round1 | 64.49 | 3.23 | 70.15 | | round2 | **81.58** | 3.25 | 76.26 | Table 6: Ablation study on the effect of self-reinforcement. REFERENCES Akari Asai, Sewon Min, Zexuan Zhong, and Danqi Chen. Retrieval-based language models and applications. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts), 2023. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, 2022. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. Skeleton-to-response: Dialogue generation guided by retrieval memory. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019a. Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi. Retrieval-guided dialogue response generation via a matching-to-generation framework. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019b. Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 
Neural machine translation with monolingual translation memory. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021. Noam Chomsky. Syntactic structures. 1957. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv preprint, abs/1803.05457, 2018. D Alan Cruse. Lexical semantics. 1986. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016. Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, May 9-11, 2016, volume 51 of JMLR Workshop and Conference Proceedings, 2016.
V8Lj9eoGl8
While the authors assert that their proposed method, ProxCoRL, is robust to the target distribution $\mu$ in contrast to ProxCuRL, the results presented in Figure 2 do not substantiate this claim. It is evident that ProxCoRL does not demonstrate effectiveness as the performance gap between ProxCoRL and ProxCuRL narrows in the case of a non-uniform task distribution (PointMass-s:2G). Could you clarify these results?
PROXIMAL CURRICULUM WITH TASK CORRELATIONS FOR DEEP REINFORCEMENT LEARNING Anonymous authors Paper under double-blind review ABSTRACT Curriculum design for reinforcement learning (RL) can speed up an agent’s learning process and help it learn to perform well on complex tasks. However, existing techniques typically require domain-specific hyperparameter tuning, involve expensive optimization procedures for task selection, or are suitable only for specific learning objectives. In this work, we consider curriculum design in contextual multi-task settings where the agent’s final performance is measured w.r.t. a target distribution over complex tasks. We base our curriculum design on the Zone of Proximal Development concept, which has proven to be effective in accelerating the learning process of RL agents for a uniform distribution over all tasks. We propose a novel curriculum, ProxCoRL, that effectively balances the need for selecting tasks that are not too difficult for the agent while progressing the agent’s learning toward the target distribution via leveraging task correlations. We theoretically justify the task selection strategy of ProxCoRL by analyzing a simple learning setting with a REINFORCE learner model. Our experimental results across various domains with challenging target task distributions affirm the effectiveness of our curriculum strategy over state-of-the-art baselines in accelerating the training process of deep RL agents. 1 INTRODUCTION Deep reinforcement learning (RL) has shown remarkable success in various fields such as games, continuous control, and robotics, as evidenced by recent advances in the field (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2017; Levine et al., 2016). However, despite these successes, the broader application of RL in real-world domains is often very limited. Specifically, training RL agents in complex environments, such as contextual multi-task settings and goal-based tasks with sparse rewards, still presents significant challenges (Kirk et al., 2021; Andrychowicz et al., 2017; Florensa et al., 2017; Riedmiller et al., 2018). Curriculum learning has been extensively studied in the context of supervised learning (Weinshall et al., 2018; Zhou & Bilmes, 2018; Elman, 1993; Bengio et al., 2009). Recent research has explored the benefits of using curriculum learning in sequential decision-making settings, such as reinforcement learning and imitation learning (Florensa et al., 2017; Riedmiller et al., 2018; Wöhlke et al., 2020; Florensa et al., 2018; Racanière et al., 2020; Klink et al., 2020a,b; Eimer et al., 2021; Kamalaruban et al., 2019; Yengera et al., 2021). The objective of curriculum design in RL is to speed up an agent’s learning process and enable it to perform well on complex tasks by exposing it to a personalized sequence of tasks (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). To achieve this objective, several works have proposed different curriculum strategies based on different design principles, such as the Zone of Proximal Development (ZPD) (Vygotsky & Cole, 1978; Chaiklin, 2003) and Self-Paced Learning (SPL) (Kumar et al., 2010; Jiang et al., 2015). However, existing techniques typically require domain-specific hyperparameter tuning, involve expensive optimization procedures for task selection, or are suitable only for specific learning objectives, such as uniform performance objectives.
In this work, we investigate curriculum design in contextual multi-task settings with varying degrees of task similarity, where the agent’s final performance is measured w.r.t. a target distribution over complex tasks. We base our curriculum design on the Zone of Proximal Development concept, which has proven to be effective in accelerating the learning process of RL agents for a uniform distribution over all tasks (Florensa et al., 2017; Wöhlke et al., 2020; Florensa et al., 2018; Tzannetos et al., 2023). We propose a novel curriculum strategy, PROXCoRL, that effectively balances the need for selecting tasks that are neither too hard nor too easy for the agent (according to the ZPD concept) while still progressing its learning toward the target distribution via leveraging task correlations. We have mathematically derived our curriculum strategy by analyzing a specific learning setting. The strengths of our curriculum strategy include its broad applicability to many domains with minimal hyperparameter tuning, computational and sample efficiency, easy integration with deep RL algorithms, and applicability to any target distribution over tasks, not just uniform distribution. Our main results and contributions are as follows: I. We propose a curriculum strategy, PROXCoRL, that effectively trades off the suitable task difficulty level for the agent and the progression towards the target tasks (Section 3). II. We mathematically derive PROXCoRL for the single target task setting with a discrete pool of tasks by analyzing the effect of picking a task on the agent’s learning progress in a specific learning scenario (Section 3.1). III. We propose an extension of PROXCoRL that can be applied to a wide range of task spaces and target distributions. This extension can be seamlessly integrated with deep RL frameworks, making it easy to use and apply in various scenarios (Section 3.2). IV. We empirically demonstrate that the curricula generated with PROXCoRL significantly improve the training process of deep RL agents in various environments, matching or outperforming existing state-of-the-art baselines (Section 4). 1.1 RELATED WORK **Curriculum strategies based on Self-Paced Learning (SPL).** In the realm of supervised learning, curriculum strategies leveraging the SPL concept attempt to strike a balance between exposing the learner to all available training examples and selecting examples in which it currently performs well (Kumar et al., 2010; Jiang et al., 2015). In the context of RL, the SPL concept has been adapted by researchers in SPDL (Klink et al., 2020a,b, 2021), SPACE (Eimer et al., 2021), and CURROT (Klink et al., 2022) by controlling the intermediate task distribution with respect to the learner’s current training progress. While both SPDL and CURROT involve a setting where the learner’s performance is measured w.r.t. a target distribution over the task space (similar to our objective), SPACE operates in a setting where the learner’s performance is measured w.r.t. a uniform distribution over the task space. SPDL and CURROT serve as state-of-the-art baselines in our experimental evaluation. The task selection mechanism varies across these methods. SPDL and CURROT operate by solving an optimization problem at each step to select the most relevant task (Klink et al., 2021, 2022). On the other hand, SPACE relies on ranking tasks based on the magnitude of differences in current/previous critic values to choose the task for the next step (Eimer et al., 2021).
Furthermore, the work of CURROT (Klink et al., 2022) showcases issues with using KL divergence to measure the similarity between task distributions as used in SPDL; instead, it introduces an alternative approach that poses curriculum design as a constrained optimal transport problem between task distributions. We provide more detailed information on the hyperparameters used in these methods in the appendix. **Curriculum strategies based on Unsupervised Environment Design (UED).** The UED problem setting involves automatically designing a distribution of environments that adapts to the learning agent (Dennis et al., 2020). UED represents a self-supervised RL paradigm in which an environment generator evolves alongside a student policy to develop an adaptive curriculum learning approach. This approach can be utilized to create increasingly complex environments for training a policy, leading to the emergence of Unsupervised Curriculum Design. PAIRED (Dennis et al., 2020) is an adversarial training technique that solves the problem of the adversary generating unsolvable environments by introducing an antagonist who works with the environment-generating adversary to design environments in which the protagonist receives a low reward. Furthermore, the connections between UED and another related method called PLR (Jiang et al., 2021b) have been explored in (Jiang et al., 2021a; Parker-Holder et al., 2022), resulting in demonstrated improvements over PAIRED. PLR, originally designed for procedurally generated environments, samples tasks/levels by prioritizing those with higher estimated learning potential when revisited in the future. TD errors are used to estimate a task’s future learning potential. Unlike (Jiang et al., 2021a; Parker-Holder et al., 2022), PLR does not assume control over the environment generation process, requiring only a black-box generation process that returns a task given an identifier. **Curriculum strategies based on ZPD concept.** Effective teaching provides tasks of moderate difficulty (neither too hard nor too easy) for the learner, as formalized by the Zone of Proximal Development (ZPD) concept (Vygotsky & Cole, 1978; Chaiklin, 2003; Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Zou et al., 2019). In the context of RL, several curriculum strategies are based on the ZPD concept, such as selecting the next task randomly from a set of tasks with success rates within a specific range (Florensa et al., 2017, 2018). However, the threshold values for success rates require tuning based on the learner’s progress and domain. A unified framework for performance-based starting state curricula in RL is proposed by Wöhlke et al. (2020), while Tzannetos et al. (2023) propose a broadly applicable ZPD-based curriculum strategy with minimal hyperparameter tuning and theoretical justifications. Nonetheless, these techniques are generally suitable only for settings where the learner’s performance is evaluated using a uniform distribution over all tasks.
**Other automatic curriculum strategies.** Various automatic curriculum generation approaches exist, including: (i) formulating the curriculum design problem as a meta-level Markov Decision Process (Narvekar et al., 2017; Narvekar & Stone, 2019); (ii) learning to generate training tasks similar to a teacher (Dendorfer et al., 2020; Such et al., 2020; Matiisen et al., 2019; Turchetta et al., 2020); (iii) using self-play for curriculum generation (Sukhbaatar et al., 2018); (iv) leveraging disagreement between different agents trained on the same tasks (Zhang et al., 2020); and (v) selecting starting states based on a single demonstration (Salimans & Chen, 2018; Resnick et al., 2018). Interested readers can refer to recent surveys on RL curriculum design (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). **Curriculum strategies based on domain knowledge.** In supervised learning, early works involve ordering examples by increasing difficulty (Elman, 1993; Bengio et al., 2009; Schmidhuber, 2013; Zaremba & Sutskever, 2014), which has been adapted in hand-crafted RL curriculum approaches (Asada et al., 1996; Wu & Tian, 2016). Recent works on imitation learning have also utilized the iterative machine teaching framework to design greedy curriculum strategies (Kamalaruban et al., 2019; Yengera et al., 2021; Liu et al., 2017; Yang et al., 2018; Zhu et al., 2018). However, all these approaches require domain-specific expert knowledge for designing difficulty measures. ## 2 Formal Setup In this section, we formalize our problem setting based on prior work on teacher-student curriculum learning (Matiisen et al., 2019). **Multi-task RL.** We consider a multi-task RL setting with a task/context space \( C \), in which each task \( c \in C \) is associated with a learning environment modeled as a contextual Markov Decision Process (MDP), denoted by \( M_c := (S, A, \gamma, T_c, R_c, P^0_c) \) (Hallak et al., 2015; Modi et al., 2018). The state space \( S \), the action space \( A \), and the discount factor \( \gamma \) are shared by all tasks in \( C \). Each contextual MDP includes a contextual transition dynamics \( T_c : S \times S \times A \rightarrow [0, 1] \), a contextual reward function \( R_c : S \times A \rightarrow [-R_{\text{max}}, R_{\text{max}}] \), where \( R_{\text{max}} > 0 \), and a contextual initial state distribution \( P^0_c : S \rightarrow [0, 1] \). We denote the space of environments by \( M = \{ M_c : c \in C \} \). **RL agent and training process.** We consider an RL agent acting in any environment \( M_c \in M \) via a contextual policy \( \pi : S \times C \times A \rightarrow [0, 1] \) that is a contextual mapping from a state to a probability distribution over actions. Given a task \( c \in C \), the agent attempts the task via a trajectory rollout obtained by executing its policy \( \pi \) in the MDP \( M_c \). The trajectory rollout is denoted as \( \xi = \{(s(\tau), a(\tau))\}_{\tau=0,1,\ldots} \) with \( s(0) \sim P^0_c \). The agent’s performance on task \( c \) is measured by the value function \( V^\pi(c) := \mathbb{E}\left[\sum_{\tau=0}^{\infty} \gamma^\tau \cdot R_c(s(\tau), a(\tau)) | \pi, M_c \right] \). The agent training corresponds to finding a policy that performs well w.r.t. a target distribution \( \mu \) over \( C \), i.e., \( \max_\pi V^\pi_\mu \) where \( V^\pi_\mu := \mathbb{E}_{c \sim \mu}[V^\pi(c)] \).
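To make the training objective concrete, the following is a minimal sketch (not the authors' code; `env_factory`, `policy`, and `target_dist` are assumed interfaces) of estimating \( V^\pi(c) \) and \( V^\pi_\mu \) by Monte Carlo rollouts over tasks sampled from the target distribution:

```python
import numpy as np

def estimate_value(env_factory, policy, context, gamma=0.99, n_rollouts=10, horizon=200):
    """Monte Carlo estimate of V^pi(c): average discounted return over rollouts in M_c."""
    returns = []
    for _ in range(n_rollouts):
        env = env_factory(context)                 # build the contextual MDP M_c
        state, total, discount = env.reset(), 0.0, 1.0
        for _ in range(horizon):
            action = policy.sample(state, context)
            state, reward, done = env.step(action)
            total += discount * reward
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))

def estimate_target_objective(env_factory, policy, target_dist, n_tasks=50, **kwargs):
    """Monte Carlo estimate of V^pi_mu = E_{c ~ mu}[V^pi(c)]."""
    contexts = [target_dist.sample() for _ in range(n_tasks)]
    return float(np.mean([estimate_value(env_factory, policy, c, **kwargs) for c in contexts]))
```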
The training process of the agent involves an interaction between two components: a student component that is responsible for policy updates and a teacher component that is responsible for task selection. The interaction happens in discrete steps indexed by \( t = 1, 2, \ldots \), and is formally described in Algorithm 1. Let \( \pi_{\text{end}} \) denote the agent’s final policy at the end of teacher-student interaction. The training objective is to ensure that the performance of the policy \( \pi_{\text{end}} \) is \( \epsilon \)-near-optimal, i.e., \( (\max_\pi V^\pi_\mu - V^\pi_{\text{end}}) \leq \epsilon \). Algorithm 1 RL Agent Training as Interaction between Teacher-Student Components 1. **Input:** RL agent’s initial policy $\pi_1$ 2. **for** $t = 1, 2, \ldots$ **do** 3. Teacher component picks a task $c_t \in C$. 4. Student component attempts the task via a trajectory rollout $\xi_t$ using the policy $\pi_t$ in $M_{c_t}$. 5. Student component updates the policy to $\pi_{t+1}$ using the rollout $\xi_t$. 6. **Output:** RL agent’s final policy $\pi_{\text{end}} \leftarrow \pi_{t+1}$. **Student component.** We consider a parametric representation for the RL agent, whose current knowledge is parameterized by $\theta \in \Theta \subseteq \mathbb{R}^d$, and each parameter $\theta$ is mapped to a policy $\pi_\theta : S \times C \times A \rightarrow [0, 1]$. At step $t$, the student component updates the knowledge parameter based on the following quantities: the current knowledge parameter $\theta_t$, the task $c_t$ picked by the teacher component, and the rollout $\xi_t = \{(s_t^{(\tau)}, a_t^{(\tau)})\}_{\tau=0}^\infty$. Then, the updated knowledge parameter $\theta_{t+1}$ is mapped to the agent’s policy given by $\pi_{t+1} := \pi_{\theta_{t+1}}$. As a concrete example, for the REINFORCE agent (Sutton et al., 1999), the knowledge parameter is updated as follows: $$\theta_{t+1} \leftarrow \theta_t + \eta_t \cdot \sum_{\tau=0}^\infty G_t^{(\tau)} \cdot g_t^{(\tau)},$$ where $\eta_t$ is the learning rate, $G_t^{(\tau)} = \sum_{\tau'=\tau}^\infty \gamma^{\tau'-\tau} \cdot R_{c_t}(s_t^{(\tau')}, a_t^{(\tau')})$, and $g_t^{(\tau)} = \nabla_\theta \log \pi_\theta(a_t^{(\tau)}|s_t^{(\tau)}, c_t)|_{\theta=\theta_t}$. **Teacher component.** At time step $t$, the teacher component selects a task $c_t$ for the student component to attempt via a trajectory rollout, as shown in line 3 in Algorithm 1. The sequence of tasks, also known as the curriculum, that is chosen by the teacher component has a significant impact on the performance improvement of the policy $\pi_t$. The primary objective of this work is to develop a teacher component to achieve the training objective in a computationally efficient and sample-efficient manner. 3 Our Curriculum Strategy ProxCoRL In Section 3.1, we mathematically derive a curriculum strategy for the single target task setting with a discrete pool of tasks by analyzing a specific learning scenario. Then, in Section 3.2, we present our final curriculum strategy that is applicable in general learning settings. 3.1 Curriculum Strategy for Single Target Task Settings In this section, we present our curriculum strategy for a setting where the task space $C$ is a discrete set and the target distribution $\mu$ is a delta distribution concentrated on a single target task $c_{\text{targ}}$. 
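Since the analysis in this section studies the REINFORCE learner described above, a minimal sketch of that student update may help fix ideas. The softmax policy over per-action features mirrors the parameterization used later in the contextual bandit analysis; all names and interfaces are illustrative rather than the authors' implementation:

```python
import numpy as np

def softmax_policy(theta, features):
    """features: array of shape (num_actions, d) holding phi(s, c, a) for each action."""
    logits = features @ theta
    logits = logits - logits.max()              # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def reinforce_update(theta, trajectory, eta, gamma=0.99):
    """One REINFORCE step: theta <- theta + eta * sum_tau G_tau * grad log pi(a_tau | s_tau, c).

    trajectory: list of (features, action_index, reward) tuples from one rollout on task c.
    """
    grad = np.zeros_like(theta)
    for tau, (features, a, _) in enumerate(trajectory):
        # Discounted return from step tau onward: G_tau = sum_{k >= tau} gamma^(k - tau) * r_k.
        G = sum(gamma ** (k - tau) * r for k, (_, _, r) in enumerate(trajectory) if k >= tau)
        probs = softmax_policy(theta, features)
        # grad log pi(a|s,c) for the softmax policy: phi(s,c,a) - sum_a' pi(a'|s,c) phi(s,c,a').
        grad += G * (features[a] - probs @ features)
    return theta + eta * grad
```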
To design our curriculum strategy, we investigate the effect of selecting a task $c_t$ at time step $t$ on the agent’s performance $V^{\pi_{\theta_t}}_\mu$ and its convergence towards the target performance $V^*_\mu := \max_\pi V^\pi_\mu$. Therefore, we define the training objective improvement at time step $t$ and analyze this metric for a specific learning scenario. **Expected improvement in the training objective.** At time step $t$, given the current knowledge parameter $\theta_t$, the task $c_t$ picked by the teacher component, and the student component’s rollout $\xi_t$, we define the improvement in the training objective as follows: $$\Delta_t(\theta_{t+1}|\theta_t, c_t, \xi_t) := (V^*_\mu - V^{\pi_{\theta_t}}_\mu) - (V^*_\mu - V^{\pi_{\theta_{t+1}}}_\mu).$$ Additionally, we define the expected improvement in the training objective at step $t$ due to picking the task $c_t$ as follows (Weinshall et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021; Graves et al., 2017): $$I_t(c_t) := \mathbb{E}_{\xi_t|c_t}[\Delta_t(\theta_{t+1}|\theta_t, c_t, \xi_t)].$$ Based on the above measure, a natural greedy curriculum strategy for selecting the next task $c_t$ is given by: $c_t \leftarrow \arg\max_{c \in C} I_t(c)$. We aim to approximate such a curriculum strategy without computing the updated policy $\pi_{\theta_{t+1}}$. To this end, we analyze the function $I_t(\cdot)$ for the REINFORCE learner model under a specific learning setting. This analysis enables us to develop an intuitive curriculum strategy by effectively combining three fundamental factors: (i) the learning potential inherent in the source task, (ii) the transfer potential between the source and target tasks, i.e., their similarity, and (iii) the potential for performance improvement in the target task. Looking ahead, it is indeed captivating to extend our investigations to more complex learning settings, where we can explore the potential for devising more sophisticated curriculum strategies. **Intuitive form of $I_t(\cdot)$.** We define $g_t : C \to \mathbb{R}^d$ as $g_t(c) := [\nabla_\theta V^{\pi_\theta}(c)]_{\theta = \theta_t}$, and $\psi_t : C \to \mathbb{R}^d$ as $\psi_t(c) := \frac{g_t(c)}{\|g_t(c)\|}$. By applying the first-order Taylor approximation of $V^{\pi_{\theta_{t+1}}}(c_{\text{targ}})$ at $\theta_t$, we approximate the improvement in the training objective as follows: $$\Delta_t(\theta_{t+1} | \theta_t, c_t, \xi_t) = V^{\pi_{\theta_{t+1}}}(c_{\text{targ}}) - V^{\pi_{\theta_t}}(c_{\text{targ}}) \approx \langle \theta_{t+1} - \theta_t, g_t(c_{\text{targ}}) \rangle.$$ The knowledge parameter update for the REINFORCE agent can be written as $\theta_{t+1} \leftarrow \theta_t + \eta_t \cdot \hat{g}_t(c_t)$, where $\hat{g}_t(c_t)$ is the sample-based gradient computed from the rollout $\xi_t$ and satisfies $\mathbb{E}_{\xi_t | c_t} [\hat{g}_t(c_t)] = g_t(c_t)$. Then, for the REINFORCE agent, we approximate the expected improvement in the training objective as follows: $$I_t(c_t) \approx \langle \mathbb{E}_{\xi_t | c_t} [\theta_{t+1} - \theta_t], g_t(c_{\text{targ}}) \rangle = \eta_t \cdot \|g_t(c_t)\| \cdot \|g_t(c_{\text{targ}})\| \cdot \langle \psi_t(c_t), \psi_t(c_{\text{targ}}) \rangle.$$ In the above, the term $\|g_t(c_t)\|$ corresponds to the learning potential inherent in the source task, the term $\|g_t(c_{\text{targ}})\|$ corresponds to the learning potential inherent in the target task, and the term $\langle \psi_t(c_t), \psi_t(c_{\text{targ}}) \rangle$ corresponds to the transfer potential between the source and target tasks.
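Read as code, this first-order approximation scores a candidate task by the product of the two gradient norms and their cosine similarity. A small illustrative sketch, assuming the per-task policy-gradient estimates are available as vectors:

```python
import numpy as np

def expected_improvement(g_src, g_targ, eta):
    """First-order approximation: I_t(c) ~= eta * ||g_t(c)|| * ||g_t(c_targ)|| * cos(g_t(c), g_t(c_targ))."""
    n_src, n_targ = np.linalg.norm(g_src), np.linalg.norm(g_targ)
    if n_src == 0.0 or n_targ == 0.0:
        return 0.0
    cos = float(g_src @ g_targ) / (n_src * n_targ)
    return eta * n_src * n_targ * cos

# A greedy teacher would then pick argmax_c expected_improvement(g[c], g[c_targ], eta).
```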
In the subsequent discussion, we analyze the function $I_t(\cdot)$ under a contextual bandit setting. **Contextual bandit setting.** We consider the REINFORCE learner model with the following policy parameterization: given a feature mapping $\phi : S \times C \times A \to \mathbb{R}^d$, for any $\theta \in \mathbb{R}^d$, we parameterize the policy as $\pi_\theta(a | s, c) = \frac{\exp(\langle \theta, \phi(s, c, a) \rangle)}{\sum_{a'} \exp(\langle \theta, \phi(s, c, a') \rangle)}$, $\forall s \in S, c \in C, a \in A$. In the following, we consider a specific problem instance of the contextual MDP setting. Let $M_c$ be a contextual MDP with a singleton state space $S = \{s\}$, and an action space $A = \{a_1, a_2\}$. Any action $a \in A$ taken from the initial state $s \in S$ always leads to a terminal state. Let $r : C \to [0, 1]$ be a mapping from task/context space $C$ to the interval $[0, 1]$. For any context $c \in C$, we denote the optimal and non-optimal actions for that context as $a_c^{\text{opt}}$ and $a_c^{\text{non}}$, respectively. The contextual reward function is defined as follows: $R_c(s, a_c^{\text{opt}}) = 1$, and $R_c(s, a_c^{\text{non}}) = 0$, for all $c \in C$. Further, we define $\psi : C \to \mathbb{R}^d$ as $\psi(c) := (\phi(s, c, a_c^{\text{opt}}) - \phi(s, c, a_c^{\text{non}}))$. Subsequently, for the REINFORCE agent operating under the above setting, the following theorem quantifies the expected improvement in the training objective at time step $t$: **Theorem 1.** For the REINFORCE agent with softmax policy parameterization under the contextual bandit setting described above, we have: $$I_t(c) \approx \eta_t \cdot \frac{V^{\pi_{\theta_t}}(c)}{V^*(c)} \cdot (V^*(c) - V^{\pi_{\theta_t}}(c)) \cdot \frac{V^{\pi_{\theta_t}}(c_{\text{targ}})}{V^*(c_{\text{targ}})} \cdot (V^*(c_{\text{targ}}) - V^{\pi_{\theta_t}}(c_{\text{targ}})) \cdot \langle \psi(c), \psi(c_{\text{targ}}) \rangle,$$ where $V^*(c) = \max_\pi V^\pi(c)$, and $\eta_t$ is the learning rate of the REINFORCE agent. A detailed proof of Theorem 1 can be found in Appendix C.1. As a more general result, we conducted an analysis of the gradient function $g_t(\cdot)$ within the context of a tree-structured contextual MDP setting, as described in Appendix C.2. This analysis establishes a connection between $\|g_t(c)\|_1$ and the term $\frac{V^{\pi_{\theta_t}}(c)}{V^*(c)} \cdot (V^*(c) - V^{\pi_{\theta_t}}(c))$. **Curriculum strategy.** Inspired by the above analysis, we propose the following curriculum strategy: $$c_t \leftarrow \arg\max_{c \in C} \frac{V^{\pi_{\theta_t}}(c)}{V^*(c)} \cdot (V^*(c) - V^{\pi_{\theta_t}}(c)) \cdot (V^*(c_{\text{targ}}) - V^{\pi_{\theta_t}}(c_{\text{targ}})) \cdot \langle \psi(c), \psi(c_{\text{targ}}) \rangle, \quad (1)$$ where $V^*(c) = \max_\pi V^\pi(c)$ and $\psi : C \to \mathbb{R}^d$ is a context representation mapping. Given that the term $\frac{V^{\pi_{\theta_t}}(c_{\text{targ}})}{V^*(c_{\text{targ}})}$ tends to have a very low value, we omit its inclusion in the above proposal for the sake of numerical stability. At time step $t$, the teacher component picks a task $c_t$ according to the above equation. The curriculum strategy involves the following quantities: (1) the agent’s relative performance on the task $c$, (2) the expected regret of the agent on the task $c$, (3) the expected regret of the agent on the target task $c_{\text{targ}}$, and (4) the correlation between the tasks $c$ and $c_{\text{targ}}$.
The product of the terms (1) and (2) enforces picking tasks that are neither too hard nor too easy for the current policy (corresponding to the ZPD principle). The product of the terms (3) and (4) enforces picking tasks that are highly correlated with the target task. The curriculum strategy effectively balances these two objectives. | | SGR | POINTMASS-S:2G | POINTMASS-S:1T | BIPEDALWALKER | |----------|---------|----------------|----------------|---------------| | Reward | binary | binary | binary | dense | | Context | $\mathbb{R}^3$ | $\mathbb{R}^5$ | $\mathbb{R}^3$ | $\mathbb{R}^2$ | | State | $\mathbb{R}^4$ | $\mathbb{R}^4$ | $\mathbb{R}^4$ | $\mathbb{R}^{24}$ | | Action | $\mathbb{R}^2$ | $\mathbb{R}^4$ | $\mathbb{R}^4$ | $\mathbb{R}^4$ | | Target Dist. | $\mathbb{R}^2$ Plane | Double-Mode Gaussian | Single Task | Uniform with trivial tasks | (a) Complexity of environments (b) Illustration of the environments Figure 1: (a) provides a comprehensive overview of the complexity of the environments based on the reward signals, context space, state space, action space, and target distribution. (b) showcases the environments by providing an illustrative visualization of each environment (from left to right): SGR, POINTMASS-S and BIPEDALWALKER. 3.2 Curriculum Strategy for General Settings In this section, we extend the curriculum strategy in Eq. (1) to practical settings of interest, i.e., a general task space $C$, a general target distribution $\mu$, and $V^*(c)$ values being unknown. We begin by constructing two large discrete sets, $\hat{C}_{\text{unif}}$ and $\hat{C}_{\text{targ}}$, which are subsets of the original task space $C$. $\hat{C}_{\text{unif}}$ is obtained by sampling contexts from $C$ according to uniform distribution, while $\hat{C}_{\text{targ}}$ is obtained by sampling contexts from $C$ according to the target distribution $\mu$. For the general setting, we consider the following curriculum strategy: $$ (c_{\text{targ}}, c_t) \leftarrow \arg\max_{(c_{\text{targ}}, c) \in \hat{C}_{\text{targ}} \times \hat{C}_{\text{unif}}} \frac{V^{\pi_{\theta_t}}(c)}{V^*(c)} \cdot (V^*(c) - V^{\pi_{\theta_t}}(c)) \cdot (V^*(c_{\text{targ}}) - V^{\pi_{\theta_t}}(c_{\text{targ}})) \cdot \langle \psi(c), \psi(c_{\text{targ}}) \rangle. $$ (2) Next, we replace $V^*(c)$ with $V_{\text{max}}$, i.e., the maximum possible value that can be achieved for any task in the task space – this value can typically be obtained for a given domain. Further, when training deep RL agents, allowing some stochasticity in task selection is useful. In particular, the arg max selection in Eq. (2) can be problematic in the presence of any approximation errors while computing $V^{\pi_{\theta_t}}(\cdot)$ values. To make the selection more robust, we replace arg max selection in Eq. (2) with softmax selection and sample $(c_{\text{targ}}, c_t)$ from the distribution given below: $$ P[(c_{\text{targ}}, c_t) = (c_{\text{targ}}, c)] \propto \exp \left( \beta \cdot \frac{V^t(c)}{V_{\text{max}}} \cdot (V_{\text{max}} - V^t(c)) \cdot (V_{\text{max}} - V^t(c_{\text{targ}})) \cdot \langle \psi(c), \psi(c_{\text{targ}}) \rangle \right), $$ (3) where $\beta$ is a hyperparameter and $V^t(\cdot)$ values are obtained from the critic network of the RL agent to estimate $V^{\pi_{\theta_t}}(\cdot)$. Finally, the teacher component samples $(c_{\text{targ}}, c_t)$ from the above distribution and provides the task $c_t$ to the student component – we refer to this selection strategy as PROXCORL. 
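A minimal sketch of this selection step (Eq. (3)) is given below; the critic lookup `V`, the similarity function `psi_sim`, and the candidate pools stand in for the agent's critic and the sampled sets $\hat{C}_{\text{unif}}$ and $\hat{C}_{\text{targ}}$, and are assumptions rather than the authors' implementation:

```python
import numpy as np

def proxcorl_sample(C_unif, C_targ, V, V_max, psi_sim, beta, rng=None):
    """Sample a (target task, training task) pair according to Eq. (3).

    C_unif, C_targ : lists of candidate contexts
    V(c)           : critic estimate of the current value of task c
    psi_sim(c, ct) : task correlation, e.g. exp(-||c - ct||_2)
    """
    if rng is None:
        rng = np.random.default_rng()
    pairs, scores = [], []
    for ct in C_targ:
        for c in C_unif:
            s = (V(c) / V_max) * (V_max - V(c)) * (V_max - V(ct)) * psi_sim(c, ct)
            pairs.append((ct, c))
            scores.append(beta * s)
    scores = np.asarray(scores)
    probs = np.exp(scores - scores.max())        # softmax over all candidate pairs
    probs /= probs.sum()
    idx = rng.choice(len(pairs), p=probs)
    return pairs[idx]                            # (c_targ_t, c_t); c_t is handed to the student
```

In practice, the double loop over $\hat{C}_{\text{targ}} \times \hat{C}_{\text{unif}}$ can be vectorized or subsampled when the candidate pools are large.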
4 Experimental Evaluation In this section, we validate the effectiveness of our curriculum strategy by conducting experiments in environments selected from the state-of-the-art works of Klink et al. (2022) and Romac et al. (2021). Throughout the experiments, we utilize the PPO method from the Stable-Baselines3 library for policy optimization (Schulman et al., 2017; Raffin et al., 2021). 4.1 Environments In our evaluation, we examine three distinct environments detailed in the following paragraphs. These environments are selected to showcase the effectiveness of our curriculum strategy in handling target distributions with varying characteristics within the context space $C$. The first environment, Sparse Goal Reaching (SGR), features target distributions with uniform coverage over specific dimensions of the context space and concentrated on one dimension. For the second environment, Point Mass Sparse (POINTMASS-S), we consider two settings. In one setting, the target distribution exhibits multiple modalities. In the second setting, the target is concentrated on a single context \( c \in C \). Lastly, the third environment has a uniform target distribution spanning the entirety of the context space. A summary and illustration of these environments are presented in Figure 1. For additional details about each environment, please refer to Appendix D.1. **Sparse Goal Reaching (SGR).** Based on the work of Klink et al. (2022), we consider a sparse-reward, goal-reaching environment in which an agent needs to reach a desired position with high precision. Such environments have previously been studied by Florensa et al. (2018). Within this environment, the contexts, denoted as \( c \in C \subseteq \mathbb{R}^3 \), encode both the desired 2D goal position and the acceptable tolerance for reaching that goal. Our primary objective centers around achieving as many goals as possible with high precision, indicated by a low tolerance threshold. In this regard, the target distribution \( \mu \) takes the form of a uniform distribution, but it is restricted to a specific 2D region within \( C \) where the tolerance (\( C\text{-Tolerance} \)) for each context is set at a minimal value of 0.05. Additionally, the presence of walls within the environment renders many of the tasks specified by \( C \) infeasible, necessitating the identification of a feasible task subspace. We generate our training tasks by randomly selecting 9900 contexts from \( C \) using a uniform distribution to create \( \hat{C}_{\text{unif}} \), and by selecting 100 contexts according to the target distribution \( \mu \) to form \( \hat{C}_{\text{targ}} \). For the purpose of evaluation, we employ a separate held-out set sampled from the target distribution \( \mu \). **Point Mass Sparse (POINTMASS-s).** Based on the work of Klink et al. (2020b), we consider a contextual POINTMASS-s environment where an agent navigates a point mass through a gate of a given size towards a goal in a two-dimensional space. To heighten the challenge, we replace the original dense reward function with a sparse one, a strategy also considered in Tzannetos et al. (2023). Specifically, in the POINTMASS-s environment, the agent operates within a goal-based reward setting where the reward is binary and sparse, i.e., the agent receives a reward of 1 only upon successfully moving the point mass to the goal position.
The parameters governing this environment, such as the gate’s position, width, and the ground’s friction coefficient, are controlled by a contextual variable \( c \in C \subseteq \mathbb{R}^3 \). This variable comprises \( C\text{-GatePosition}, C\text{-GateWidth}, \) and \( C\text{-Friction} \). Our experimental section explores two distinct POINTMASS-s environment settings. In the first setting, denoted as POINTMASS-s:2G, the target distribution \( \mu \) takes the form of a bimodal Gaussian distribution. Here, the means of the contextual variables \( [C\text{-GatePosition}, C\text{-GateWidth}] \) are set to \([-3.9, 0.5]\) and \([3.9, 0.5]\) for the two modes, respectively. In the second setting, POINTMASS-s:1T, the target distribution \( \mu \) is concentrated on a single context \( c \in C \). More precisely, the contextual variables \( [C\text{-GatePosition}, C\text{-GateWidth}, C\text{-Friction}] \) take on the following values: \([0.9, 0.5, 3.5]\). To construct our training tasks, we draw 20000 contexts from \( C \) using a uniform distribution, forming \( \hat{C}_{\text{unif}} \). The set \( \hat{C}_{\text{targ}} \) is created by sampling 400 contexts from \( C \) according to the target distribution \( \mu \). We employ a held-out set sampled from the target distribution \( \mu \) for evaluation purposes. **Bipedal Walker Stump Tracks (BIPEDALWALKER).** We conduct additional experiments within the TeachMyAgent benchmark for curriculum techniques, as introduced in Romac et al. (2021). In this context, we chose a bipedal agent tasked with walking in the Stump Tracks environment, which is an extension of the environment initially proposed in Portelas et al. (2019). The state space comprises lidar sensors, head position, and joint positions. The action space is continuous, and the goal is to learn a policy that controls the torque of the agent’s motors. The walker is rewarded for going forward and penalized for torque usage. An episode lasts 2000 steps and is terminated if the agent reaches the end of the track or if its head collides with the environment (in which case a reward of \(-100\) is received). Within this environment, the contextual variables \( c \in C \subseteq \mathbb{R}^2 \) control the height (\( C\text{-StumpHeight} \)) and spacing (\( C\text{-StumpSpacing} \)) of stumps placed along the track for each task. Our experimental setup is equivalent to the bipedal walker stump track environment with mostly trivial tasks, as described in Romac et al. (2021). In this setup, \( C\text{-StumpHeight} \) is constrained to the range \([-3; 3]\), while \( C\text{-StumpSpacing} \) remains within \([0; 6]\). Notably, the environment enforces the clipping of negative values for \( C\text{-StumpHeight} \), setting them to 0. Consequently, half of the tasks have a mean stump height of 0, introducing a significant proportion of trivial tasks (50%). To address the procedural task generation, we randomly draw 1000 tasks from \( C \) to construct the training task set, denoted as \( \hat{C}_{\text{unif}} \). Additionally, every four epochs, we resample 1000 tasks and update the training set \( \hat{C}_{\text{unif}} \). The set \( \hat{C}_{\text{targ}} \) is obtained by sampling 500 tasks from \( C \) according to the target distribution \( \mu \), which is uniform in \( C \). Figure 2: Performance comparison of RL agents trained using different curriculum strategies. The performance is measured as the mean return (±1 standard error) on the test pool of tasks. 
The results are averaged over 10 random seeds for SGR, 15 random seeds for POINTMASS-S:2G, 15 random seeds for POINTMASS-S:1T, and 10 random seeds for BIPEDALWALKER. The plots are smoothed across 2 evaluation snapshots that occur over 25000 training steps. 4.2 Curriculum Strategies Evaluated Variants of our curriculum strategy. We consider two curriculum strategies as described next. First, PROXCoRL is based on Eq. (3). Throughout all the experiments, we use the following choice to compute the similarity between $\psi(c)$ and $\psi(c_{\text{targ}})$: $\exp(-||c - c_{\text{targ}}||_2)$. Second, PROXCoRL-UN is a variant that does not take the target distribution $\mu$ into account and hence ignores task correlations. Specifically, PROXCoRL-UN drops the target task-related terms (3) and (4) derived in Eq. (1), and selects the next task according to the following distribution: $$P[c_t = c] \propto \exp \left( \beta \cdot \frac{V^t(c)}{V_{\max}} \cdot (V_{\max} - V^t(c)) \right).$$ We note that this strategy is similar to a ZPD-based curriculum strategy proposed in Tzannetos et al. (2023) for the uniform performance objective. State-of-the-art baselines. SPDL (Klink et al., 2020b), CURROT (Klink et al., 2022), PLR (Jiang et al., 2021b), and GRADIENT (Huang et al., 2022) are state-of-the-art curriculum strategies for contextual RL. We adapt the implementation of an improved version of SPDL, presented in Klink et al. (2021), to work with a discrete pool of contextual tasks. PLR (Jiang et al., 2021b) was originally designed for procedurally generated content settings, but we have adapted its implementation for the contextual RL setting operating on a discrete pool of tasks. Prototypical baselines. We consider two prototypical baselines: IID and TARGET. The IID strategy samples the next task from $C$ with a uniform distribution, while the TARGET strategy samples according to the target distribution $\mu$. 4.3 Results Convergence behavior. As illustrated in Figure 2, the RL agents trained using our curriculum strategy, PROXCoRL, perform competitively w.r.t. those trained with state-of-the-art and prototypical baselines. In Figure 2a for SGR, PROXCoRL outperforms all the other techniques by a large margin. PROXCoRL selects tasks that are neither too hard nor too easy for the agent’s current policy and are also correlated with the target distribution. CURROT stands out among other strategies due to its ability to gradually choose tasks from the target distribution. Importantly, solely selecting target contexts for training is inadequate, as evidenced by the underperformance of TARGET compared to all other techniques. The results for POINTMASS-S:2G are presented in Figure 2b, where we can observe that PROXCoRL, PROXCoRL-UN, and CURROT outperform the other strategies. PROXCoRL demonstrates success in handling bimodal target distributions by alternating the selection between the modes of the target distribution. Although it initially learns more slowly than PROXCoRL-UN and CURROT, it eventually matches or surpasses their performance. Despite PROXCoRL-UN not explicitly considering the target distribution in its formulation, it progressively selects more challenging contexts and effectively encompasses the tasks from the target distribution in this scenario. For POINTMASS-S:1T, in Figure 2c, we observe that PROXCoRL succeeds on the single target task more quickly than the other techniques. Although CURROT converges more slowly, it eventually performs similarly to the proposed technique.
For BIPEDALWALKER (Figure 2d), where the target distribution is uniform, PROXCoRL-UN achieves the best performance; by definition, this technique targets a uniform performance objective. PROXCoRL is able to handle a uniform target distribution better than CURROT in this setting. Figure 3: (a) presents the average C-Tolerance of the tasks selected by different curriculum strategies for SGR. (b-c) present the average distance between the selected contexts C-GatePosition and C-GateWidth and the target distribution for POINTMASS-S:2G. (d) presents the two-dimensional context space of POINTMASS-S:2G. The target distribution is depicted as a black x and encodes the two gates with C-GateWidth = 0.5 at C-GatePosition = \{-3.9, 3.9\}. Each colored dot represents the context/task selected by PROXCoRL during training, where brighter colors indicate later training stages. (e) presents the two-dimensional context space of BIPEDALWALKER. The target distribution is uniform. Each colored dot represents the context/task selected by PROXCoRL during training, where brighter colors indicate later training stages. Curriculum plots. Figure 3a displays the average C-Tolerance of tasks selected by the proposed curriculum (PROXCoRL), CURROT, IID, and TARGET. Our observation reveals that both PROXCoRL and CURROT manage to reduce the average C-Tolerance below that of IID, indicating that both techniques gradually prioritize tasks that align with the target distribution. However, it is noteworthy that CURROT continues to decrease the context values to reach the target distribution, while PROXCoRL does not necessarily converge to the target distribution. This trend is similarly evident in Figures 3b and 3c, where PROXCoRL, after succeeding on the target distribution, returns to sampling closer to IID. Conversely, CURROT persists in reducing the context values to attain convergence with the target distribution. Figure 3d provides a visual representation of the two-dimensional context space for the POINTMASS-S:2G setting. The curriculum initially starts from larger C-GateWidth values and centered C-GatePosition values, gradually shifting towards the two modes of the target distribution in the later stages of training. In Figure 3e, we depict the two-dimensional context space for the BIPEDALWALKER setting. Despite the uniformity of the target distribution of contexts, we observe that in the later stages of training, PROXCoRL disregards trivial tasks characterized by C-StumpHeight values smaller than 0. Instead, it focuses on tasks from the remaining task space. 5 CONCLUDING DISCUSSIONS We proposed a novel curriculum strategy that strikes a balance between selecting tasks that are neither too hard nor too easy for the agent and progressing the agent’s learning toward the target distribution by utilizing task correlation. We mathematically derived our curriculum strategy through an analysis of a specific learning scenario and demonstrated its effectiveness in various environments through empirical evaluations. Here, we discuss a few limitations of our work and outline a plan on how to address them in future work. First, it would be interesting to extend our curriculum strategy to high-dimensional context spaces in sparse reward environments. However, sampling new tasks in such environments poses a significant challenge due to the need to estimate the value of all tasks in the discrete sets \(\hat{C}_{\text{unif}}\) and \(\hat{C}_{\text{targ}}\).
Second, while our curriculum strategy uses a simple distance measure to capture task correlation, it would be worthwhile to investigate the effects of employing different distance metrics over the context space on curriculum design. REFERENCES Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight Experience Replay. In NeurIPS, 2017. Minoru Asada, Shoichi Noda, Sukoya Tawaratsumida, and Koh Hosoda. Purposive Behavior Acquisition for a Real Robot by Vision-based Reinforcement Learning. Machine learning, 23(2-3): 279–303, 1996. Adrien Baranes and Pierre-Yves Oudeyer. Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots. Robotics and Autonomous Systems, 61(1):49–73, 2013. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum Learning. In ICML, 2009. Seth Chaiklin. The Zone of Proximal Development in Vygotsky’s Analysis of Learning and Instruction. Vygotsky’s Educational Theory in Cultural Context, pp. 39, 2003. Patrick Dendorfer, Aljosa Osep, and Laura Leal-Taixé. Goal-GAN: Multimodal Trajectory Prediction based on Goal Position Estimation. In ACCV, 2020. Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, and Sergey Levine. Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design. In NeurIPS, 2020. Theresa Eimer, André Biedenkapp, Frank Hutter, and Marius Lindauer. Self-Paced Context Evaluation for Contextual Reinforcement Learning. In ICML, 2021. Jeffrey L Elman. Learning and Development in Neural Networks: The Importance of Starting Small. Cognition, 48(1):71–99, 1993. Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse Curriculum Generation for Reinforcement Learning. In CORL, 2017. Carlos Florensa, David Held, Xinyang Geng, and Pieter Abbeel. Automatic Goal Generation for Reinforcement Learning Agents. In ICML, 2018. Alex Graves, Marc G Bellemare, Jacob Menick, Rémi Munos, and Koray Kavukcuoglu. Automated Curriculum Learning for Neural Networks. In ICML, 2017. Assaf Hallak, Dotan Di Castro, and Shie Mannor. Contextual Markov Decision Processes. CoRR, abs/1502.02259, 2015. Peide Huang, Mengdi Xu, Jiacheng Zhu, Laixi Shi, Fei Fang, and Ding Zhao. Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation. In NeurIPS, 2022. Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. Self-Paced Curriculum Learning. In AAAI, 2015. Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, and Tim Rocktäschel. Replay-Guided Adversarial Environment Design. In NeurIPS, 2021a. Minqi Jiang, Edward Grefenstette, and Tim Rocktäschel. Prioritized Level Replay. In ICML, 2021b. Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla. Interactive Teaching Algorithms for Inverse Reinforcement Learning. In IJCAI, 2019. Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. A Survey of Generalisation in Deep Reinforcement Learning. CoRR, abs/2111.09794, 2021. Pascal Klink, Hany Abdulsamad, Boris Belousov, and Jan Peters. Self-Paced Contextual Reinforcement Learning. In CORL, 2020a. Pascal Klink, Carlo D’Eramo, Jan R Peters, and Joni Pajarinen. Self-Paced Deep Reinforcement Learning. In NeurIPS, 2020b.
ZMuPAOY8Oz
As a consequence, especially because the multiplication seemed to have run better in this case, please check if the multiplication mechanism is implemented such that the only meaningful operations happen at the border of the causal mask.
POSITIONAL DESCRIPTION MATTERS FOR TRANSFORMERS ARITHMETIC Anonymous authors Paper under double-blind review ABSTRACT Transformers, central to the successes in modern Natural Language Processing, often falter on arithmetic tasks despite their vast capabilities—which paradoxically include remarkable coding abilities. We observe that a crucial challenge is their naive reliance on positional information to solve arithmetic problems with a small number of digits, leading to poor performance on larger numbers. Herein, we delve deeper into the role of positional encoding, and propose several ways to fix the issue, either by modifying the positional encoding directly, or by modifying the representation of the arithmetic task to leverage standard positional encoding differently. We investigate the value of these modifications for three tasks: (i) classical multiplication, (ii) length extrapolation in addition, and (iii) addition in natural language context. For (i) we train a small model on a small dataset (100M parameters and 300k samples) with remarkable aptitude in (direct, no scratchpad) 15-digit multiplication and essentially perfect up to 12 digits, while usual training in this context would give a model failing at 4-digit multiplication. In the experiments on addition, we use a mere 120k samples to demonstrate: for (ii) extrapolation from 10 digits to testing on 12-digit numbers while usual training would have no extrapolation, and for (iii) almost perfect accuracy up to 5 digits while usual training would be correct only up to 3 digits (which is essentially memorization with a training set of 120k samples). 1 INTRODUCTION In the world of Large Language Model (LLM) advancements, arithmetic operations stand as an important yet frequently underestimated challenge. The emergence and triumph of models like GPT-4 (OpenAI, 2023) have had a transformative impact on various sectors, illuminating new potentials in Natural Language Processing and more. However, as we delve deeper into the diverse applications of these models, arithmetic tasks continually pose obstacles (Dziri et al., 2023), e.g., even GPT-4 struggles with tasks such as 4-digit multiplication. Arithmetic tasks differ significantly from typical natural language tasks. The primary distinction lies in their execution: arithmetic operations might demand intricate intermediate steps internally, whereas most other tasks might not need such extensive internal processing. Furthermore, arithmetic operations possess a distinct data format. They utilize a concise vocabulary, but the potential combinations are vast. They also showcase more predictable patterns, and each token in an arithmetic sentence can hold equal significance. This contrasts with other tasks where omitting some non-essential words might not affect the overall meaning. Given the stark differences between arithmetic and other tasks, it’s uncertain whether there’s a straightforward way to bolster a language model’s proficiency in arithmetic. Specifically, it’s unclear if the prevailing architecture—tailored mainly for natural language tasks—can efficiently and accurately tackle arithmetic tasks. Moreover, this uniqueness of arithmetic also presents an opportunity: the structured nature of arithmetic, with its transparent steps and definitive outcomes, offers an ideal framework for a deeper understanding of the models.
Addressing the challenges of arithmetic tasks and enhancing the arithmetic proficiency of LLMs can also contribute to a deeper understanding of their strengths and limitations. In this work, we investigate the capabilities of language models concerning arithmetic operations, emphasizing techniques related to efficient position information utilization. Before delving into our approaches, we first identify the important challenges that arithmetic tasks face. Three such challenges, central to this study, are: **Complicated Calculation** Large-number multiplication and similar arithmetic tasks often involve intricate intermediate steps. Human solutions without using a calculator typically involve digit-wise multiplication, followed by summation. However, allowing a model to record each step can be verbose and inefficient for larger numbers. We investigate the feasibility of enabling a small transformer to directly output the product of large multiplication tasks. **Length Extrapolation** Arithmetic data, unlike natural language data, typically exhibits highly regular patterns. As a result, models often depend on absolute position information to solve such tasks (Lee et al., 2023). For instance, in an addition operation like $a_1b_1c_1 + a_2b_2c_2$, aligning digits in corresponding positions (e.g., $a_1$ in position 1 and $a_2$ in position 5) presents the simplest solution. However, for a four-digit addition task like $a_1b_1c_1d_1 + a_2b_2c_2d_2$ that hasn’t appeared in training, it’s unclear how to handle $d_1$ at position 4. **Arithmetic and Language Integration** The poor performance of transformers on arithmetic data may be partly due to the lack of arithmetic data in the training set. However, it’s uncertain whether simply supplementing the model with arithmetic data will resolve the problem. It’s unclear if the model can successfully integrate arithmetic and natural language data due to their substantial differences. We tackle the above challenges through two types of approaches, both aimed at utilizing position information more efficiently. A first approach is to alter the positional encoding directly. In this work, we explore an alternative positional encoding, namely randomized embedding, which is simple yet efficient for arithmetic tasks. A less direct approach for better position information utilization is to modify the representation of the arithmetic data to leverage standard positional encoding differently. We investigate how altering the data format can lead the model to learn the arithmetic task differently and exhibit varying properties. In this work, we focus on small models of GPT2-small size (124M parameters). Our findings reveal that even such modest-sized models can adeptly execute intricate arithmetic tasks. This not only underscores the capability of the transformer architecture to handle arithmetic but also shows that a small subset of model parameters can integrate arithmetic proficiency into language models without affecting the model’s capacity on other tasks. We study large-number multiplication in Section 2, length generalization in Section 3, and arithmetic and language integration in Section 4. In this work, we tackle the three challenges outlined above separately. However, in practice, we would need a single model that is able to show all the properties we want. This can be done by combining the approaches used in this paper, which we leave as future work.
For the purposes of this paper, we’ve maintained consistency in data size, model size, and training epochs, though it’s conceivable that our observed outcomes could be achieved with reduced data sizes, smaller models, and fewer training epochs. **Related Works** Several recent works have studied using transformers to solve arithmetic tasks. Charton (2021, 2022) studied using transformers to do linear algebra. Zhang et al. (2022) studied a variable assignment task. Qian et al. (2022) demonstrated the limitations of language models on arithmetic tasks. Hanna et al. (2023) studied the ability of GPT2 on arithmetic tasks from an interpretability viewpoint. Dziri et al. (2023) showed that even fine-tuned GPT3 has trouble performing 3-digit multiplication. Yang et al. (2023) trained a model of size 2B to perform arithmetic tasks and beat the performance of GPT-4, but the accuracy obtained is not perfect even for 5-digit numbers. Lee et al. (2023) focused on the sample efficiency of using various data formats for arithmetic tasks and also studied the challenges we address in this paper, focusing on small numbers such as 3-digit addition and 2-digit multiplication. We are not aware of any previous work that is able to output the product of two 15-digit numbers, essentially perfectly up to 12 digits, as demonstrated in our paper. Lee et al. (2023) also illustrates a model’s ability to learn arithmetic and language simultaneously, but the two types of data remain separated. A long list of works has focused on length generalization of transformers using a variety of positional encodings, including Su et al. (2021), Press et al. (2021), Li & McClelland (2022), Ruoss et al. (2023). Jelassi et al. (2023) show that with relative position embeddings (Su et al., 2021), encoder-only models can generalize to significantly longer lengths on arithmetic tasks. To solve math problems using transformers, Uesato et al. (2022), Cobbe et al. (2021) and Lightman et al. (2023) used verifiers and feedback. Zhou et al. (2022) used advanced prompting techniques. 2 LARGE NUMBER MULTIPLICATION Multiplication entails a sequence of intermediate steps, especially when dealing with large numbers. Modern language models like GPT-4 often find it challenging to handle these extensive multiplications (see Table 1). One test we can do is to ask the model to output the product directly, without using a scratchpad. We believe studying how the model can output the answer directly, bypassing intermediary steps, is an important research direction because in practice, outputting every step can be laborious and time-consuming. More importantly, always outputting the full steps can also prevent the model from using the most efficient method to solve the problem. In Section 2.1, we show a simple 12-layer transformer can output the product of $15 \times 15$-digit multiplication directly, demonstrating the immense potential of transformers. Constraining the model to use the scratchpad can force the model to adopt suboptimal strategies. While it can be hard for the model to learn to output the answers directly without using a scratchpad, our experiment indicates that given the right dataset and training regimen, it is feasible. Large number multiplication is complicated, so it can be hard for the model to detect the rules for multiplication if we train the model directly with complicated multiplication tasks. However, there exist simple cases such as one-digit multiplications.
By starting with these straightforward cases, the model can initially grasp rules from the basics and then extrapolate to more complex situations. | # Digits | 1 | 2 | 3 | 4 | 5 | |----------|-----|-----|-----|-----|-----| | 1 | 1.00| 1.00| 1.00| 0.99| 1.00| | 2 | 1.00| 0.98| 0.96| 0.75| 0.58| | 3 | 1.00| 0.93| 0.59| 0.24| 0.18| | 4 | 0.98| 0.80| 0.28| 0.05| 0.01| | 5 | 0.96| 0.60| 0.12| 0.00| 0.00| Table 1: Accuracy of GPT-4 on 1-to-5-digit multiplications. We use the prompt “What is Sa * Sb?”, where Sa and Sb are the multiplicand and the multiplier. The row number represents the number of digits the multiplier has. The column number represents the number of digits the multiplicand has. For our initial attempt, we included a lot of small-number multiplication in our dataset. Our aim was to ensure the model had ample exposure to basic multiplications, enabling it to grasp multiplication rules. We create a dataset with 300k samples on 1-to-n-digit multiplication. We generate the two numbers in a way such that the number of digits of the two numbers is sampled uniformly randomly from $\{1, ..., n\}$. Although this uniform distribution ensures a balanced representation of numbers of different lengths, our emphasis leans towards smaller numbers. For example, our training set contains around 8k examples of a single-digit number times a single-digit number, but there are only 100 distinct one-digit multiplications, so single-digit multiplications are heavily repeated in our training set. In contrast, the training set contains less than 0.0002% of all 5-digit times 5-digit combinations. In the “Basic” format, the multiplier, multiplicand, and their product are presented straightforwardly. For instance, for two numbers 73866 and 1001, we write down “7 3 8 6 6 * 1 0 0 1 # 7 3 9 3 9 8 6 6”. We show in Table 2 the performance of a randomly initialized GPT2-small trained for 300 epochs when $n = 5$ and in Table 14 the performance when $n = 10$. The model performs well on 1-2-digit multiplication, but very poorly on large numbers. Notably, we see a trend that the model performs poorly when the sum of the number of digits in the two factors is greater than 5. When the sum is smaller than 5, the training set includes more than 10% of all possible number combinations, leading to uncertainty regarding whether the model’s proficiency with smaller numbers stems from genuine understanding or mere memorization. Our findings show that emphasizing the small numbers is not enough for the model to perform well on large numbers. As the next step, we will focus on modifying the simple case, where the model can grasp the rule, so that the model can extrapolate to the hard cases efficiently. (Footnote 1: In this paper, for every dataset used, a space is inserted before each digit. This ensures the tokenizer tokenizes each digit as an individual token.) In Section 2.1 and Section A.1, we will present two distinct approaches designed to help the model draw connections between simpler and more complex tasks. These two approaches follow different principles and we hope they can inspire innovative simple case formulations not only for this multiplication task but for other tasks as well. ### 2.1 PADDING For datapoints on multiplications of numbers with different numbers of digits, the position of the product sign varies. Consequently, the model needs to figure out the position of the product sign first and then perform the operation based on the relative position. This makes the rule of operation unnecessarily hard.
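For concreteness, here is a small sketch (ours, not the paper's released code) of how "Basic"-format training strings like the example above can be generated, with the per-digit spacing described in the footnote; padding and product reversal, introduced next, are simple variations on this generator:

```python
import random

def spaced(n: int) -> str:
    """Insert a space between digits so the tokenizer splits each digit into its own token."""
    return " ".join(str(n))

def basic_sample(max_digits: int) -> str:
    """One 'Basic'-format training example, e.g. '7 3 8 6 6 * 1 0 0 1 # 7 3 9 3 9 8 6 6'."""
    da, db = random.randint(1, max_digits), random.randint(1, max_digits)
    a = random.randint(10 ** (da - 1), 10 ** da - 1)   # da-digit multiplicand
    b = random.randint(10 ** (db - 1), 10 ** db - 1)   # db-digit multiplier
    return f"{spaced(a)} * {spaced(b)} # {spaced(a * b)}"
```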
A simple modification we can adopt is to add zero-padding to the training samples so that all the numbers are given in the same length. In this way, all multiplications will follow one rule no matter how many digits the two factors have. If the maximum number of digits for the factors is \( n \), we pad with zeros so that both factors contain \( n \) digits and the product contains \( 2n \) digits. In addition, to make the task even easier, we can reverse the digits in the product. The rationale behind this is that to get the most significant digit of the product, we need to compute the carry from each digit accurately, but to get the least significant digit, we only need to use the least significant digits of the two factors. As a result, starting with the least significant digit and progressing to the most significant digit is more straightforward. This intuitive approach has been used in previous works such as Lee et al. (2023).

#### Table 3: Examples of the data format for multiplication.

| Data Format | Example |
|------------------------------|----------------------------------------------|
| Basic | 7 3 8 6 6 * 1 0 0 1 # 7 3 9 3 9 8 6 6 |
| Reverse Product | 7 3 8 6 6 * 1 0 0 1 # 6 6 8 9 3 9 3 7 |
| Add Padding | 7 3 8 6 6 * 0 1 0 0 1 # 0 0 7 3 9 3 9 8 6 6 |
| Add Padding + Reverse Product | 7 3 8 6 6 * 0 1 0 0 1 # 6 6 8 9 3 9 3 7 0 0 |

#### Table 4: Testing accuracy for models trained on data with padding and/or reversed product when the maximum number of digits is 5.

#### Table 5: Testing accuracy for 15-maximum-digit multiplication with padding and reversed product.

In Table 3, we present examples of our data format. We give more details on the dataset and the setup in Appendix B. The accuracy achieved by GPT2-small trained on 300k samples using padding and/or reversed product for multiplications with a maximum of 5 and 10 digits is detailed in Table 4 and Table 15, respectively. The results indicate that padding markedly boosts accuracy, while reversing the product further elevates it to near perfection. Utilizing both padding and reversed product allows us to accurately compute up to $15 \times 15$ multiplications, as shown in Table 5. This is a remarkable enhancement when compared to the non-padded data format, which encountered difficulties even with $4 \times 4$ multiplication. The benefit of padding is that it standardizes the format between basic and more complex cases, enabling them to be addressed by a singular rule and enhancing the link between them. However, when evaluating accuracy for a maximum of 20 digits in Table 16, the results for larger numbers are unsatisfactory. We did not fine-tune the parameters in our experiment, so it is possible we can achieve high accuracy for even more digits if we use a larger training set, a more optimal digit distribution, or a more fine-tuned learning rate, etc.
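To make the formats in Table 3 concrete, the following is a minimal sketch of how such training strings can be generated. It is an illustration of the formats described above rather than the exact data-generation code used in our experiments; the function names and the independent sampling of the two digit counts are our own assumptions.

```python
import random

def spaced(digits: str) -> str:
    # A space separates every digit so that the tokenizer maps each digit to its own token.
    return " ".join(digits)

def format_sample(a: int, b: int, max_digits: int, pad: bool, reverse: bool) -> str:
    a_str, b_str, prod = str(a), str(b), str(a * b)
    if pad:
        # Zero-pad both factors to max_digits and the product to 2 * max_digits.
        a_str, b_str = a_str.zfill(max_digits), b_str.zfill(max_digits)
        prod = prod.zfill(2 * max_digits)
    if reverse:
        # Write the product starting from the least significant digit.
        prod = prod[::-1]
    return f"{spaced(a_str)} * {spaced(b_str)} # {spaced(prod)}"

def sample_factor(max_digits: int) -> int:
    # Draw the digit count uniformly from {1, ..., max_digits}, then a number of that length.
    d = random.randint(1, max_digits)
    low = 0 if d == 1 else 10 ** (d - 1)
    return random.randint(low, 10 ** d - 1)

# Reproduces the "Add Padding + Reverse Product" row of Table 3.
print(format_sample(73866, 1001, max_digits=5, pad=True, reverse=True))
# -> 7 3 8 6 6 * 0 1 0 0 1 # 6 6 8 9 3 9 3 7 0 0
```

With `pad=False` and `reverse=False`, the same function produces the "Basic" row.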
3 LENGTH EXTRAPOLATION

In this section, we tackle a different challenge from the previous section: length extrapolation. While relying on position information can help complicated arithmetic tasks, overreliance on position can hurt generalization to additional digits. Based on the idea of reducing the reliance on absolute positional information, in Section 3.1 we delve into various data formats that can help generalization to additional digits, and in Section 3.2, we investigate the role of vanilla positional embedding in arithmetic tasks and explore alternative positional embeddings that better suit the needs of arithmetic tasks.

3.1 DATA FORMAT

In this section, we explore the impact of different data formats on the generalization performance of models when faced with additional digits in the addition task. We propose two distinct data formats that aid in improving the models' ability to generalize.

One straightforward data format is a chain-of-thought (Wei et al., 2022) style scratchpad. In this format, we first write down the two addends of the addition, followed by the digit-wise summation steps and the final sum. However, as expected, this format struggles to generalize to numbers longer than those encountered during training. A common mistake made by the model is omitting digits while recording the digit-wise summation steps.

To address this issue, we explore new data formats based on two key ideas. The first idea involves introducing random spaces between characters in the data. By doing so, we make it more challenging for the model to rely on absolute positional embedding to solve the task. This disruption encourages the model to consider other cues beyond the absolute positional information. The second idea is based on repeating more information for each digit-wise step. This approach allows the model to access additional information, enabling it to learn the actual steps rather than solely relying on memorizing positional patterns. The increased redundancy makes it harder for the model to overfit to positional information. We found that data formats based on both of these ideas significantly improve generalization performance. By incorporating random spaces and increasing information repetition, the models gain the ability to better handle numbers with more digits and exhibit enhanced generalization performance.

We test our two ideas on three data formats. Table 7 shows the examples of these three data formats, where random space is based on the first idea and recursive scratchpad is based on the second idea. We give the formal definition of the data formats and the setup in Appendix C.1. We show in Table 6 the accuracy of the model on the three types of data formats. We further give a few failure examples of the models trained on each data format in Table 17.

Our experimental results corroborate our conjectures. In the "Basic" data format, the model fails to generalize to numbers exceeding 10 digits. If the model is given two addends that exceed this length, it simply omits some digits and outputs a result with only 10 steps. However, incorporating random spaces into the training set compels the model to move away from relying on absolute positional embedding since it can no longer retrieve the digits from fixed positions. Despite the model's accurate prediction only extending by one digit, this progression represents a significant improvement, demonstrating a phase transition from a complete lack of generalization to some degree of it. We observe an even more significant improvement in generalization performance when we increase the information provided in each digit-wise step. This suggests that adding more information can encourage the model to learn the fundamental recursive steps required for addition, as opposed to overfitting to positional information.
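To make the "Basic" scratchpad and the random-space idea concrete before the examples in Table 7 below, here is a minimal formatting sketch. The exact formats are defined in Appendix C.1, so the step rendering and the way random spaces are inserted here are our own approximations; the recursive scratchpad, which restates the remaining addend digits at every step, is omitted for brevity.

```python
import random

def basic_scratchpad(a: int, b: int) -> str:
    """Digit-wise addition steps in the style of the "Basic" row of Table 7."""
    a_rev, b_rev = str(a)[::-1], str(b)[::-1]                 # least significant digit first
    n = max(len(a_rev), len(b_rev))
    a_rev, b_rev = a_rev.ljust(n, "0"), b_rev.ljust(n, "0")
    steps, carry = [], 0
    for da, db in zip(a_rev, b_rev):
        total = carry + int(da) + int(db)
        steps.append(f"{carry} + {da} + {db} = {' '.join(str(total))}")
        carry = total // 10
    answer = " ".join(str(a + b))
    return f"{' '.join(str(a))} + {' '.join(str(b))} : " + " , ".join(steps) + f" , {answer}"

def with_random_spaces(text: str, p: float = 0.2) -> str:
    """First idea: insert extra spaces at random positions so digits no longer sit at fixed offsets."""
    return "".join((" " if random.random() < p else "") + ch for ch in text)

print(basic_scratchpad(239, 821))
# -> 2 3 9 + 8 2 1 : 0 + 9 + 1 = 1 0 , 1 + 3 + 2 = 6 , 0 + 2 + 8 = 1 0 , 1 0 6 0
```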
| Data Format | 9 Digits | 10 Digits | 11 Digits | 12 Digits | 13 Digits | |----------------------|----------|-----------|-----------|-----------|-----------| | Basic | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | | Random Space | 1.00 | 1.00 | 0.99 | 0.09 | 0.00 | | Recursive Scratchpad | 1.00 | 1.00 | 1.00 | 0.96 | 0.55 | Table 6: Testing accuracies on 9-13-digit-addition of models trained on the three data formats of 2-10-digit-addition. | Data Format | Example | |----------------------|-------------------------------------------------------------------------| | Basic | $2 \ 3 \ 9 + 8 \ 2 \ 1 : 0 + 9 + 1 = 1 \ 0 , 1 + 3 + 2 = 6 , 0 + 2 + 8 = 1 \ 0 , 1 \ 0 \ 6 \ 0$ | | Random Space | $2 \ 3 \ 9 + 8 \ 2 \ 1 : 0 + 9 + 1 = 1 \ 0 , 1 + 3 + 2 = 6 , 0 + 2 + 8 = 1 \ 0 , 1 \ 0 \ 6 \ 0$ | | Recursive Scratchpad | $2 \ 3 \ 9 + 8 \ 2 \ 1 : 0 + 9 + 1 = 1 \ 0 , 3 \ 2 + 2 \ 8 : 1 + 3 + 2 = 6 , = 6 \ 0 , 2 + 8 : 0 + 2 + 8 = 1 \ 0 , = 0 \ 6 \ 0 , = 1 \ 0 \ 6 \ 0$ | Table 7: Examples of the data format for adding 239 and 821. In addition, we would like to make the following remarks. **Pretrained vs. Randomly Initialized** We found that in this task, using a pretrained model is important for “recursive scratchpad”. Without using a pretrained model, “recursive scratchpad” won’t help generalization to additional digits. However, it does not make much difference for “random space”. For both pretrained and randomly initialized models, “basic” does not generalize to additional digits. We will have more discussion on training from scratch on the addition task in Section 3.2. **Reverse the order of the digits in the addends** For “Recursive Scratchpad”, we found that reversing the order of the digits of the addends can help the generalization performance. However, reversing the order of both the addends and the sum will not help as much. ### 3.2 Positional Embedding As we discussed in Section 3.1, the data format can greatly influence a model’s dependency on positional information, which subsequently affects its generalization capacity. In this section, we directly examine positional embedding by studying its limitations and exploring potential alternatives. To better understand the significance of positional embedding, we first consider a simpler task: given a number, the model outputs its digits in reverse order. For instance, if the input number is 12345, the output should be 54321. We evaluate the model’s performance on numbers that are longer than those in the training set and investigate challenging cases such as numbers with many repeated digits. We give the formal definition of the two types of data in Appendix C.2. As an initial step, we eliminated the positional embedding of the GPT2-small while leaving the rest of the architecture intact. It appears that for both the pre-trained model and the model trained from scratch, the removal of positional embedding enhances the generalization capacity across more digits. We show in Figure 1A the test accuracy of both models on regular and repetitive data. Figure 1A indicates that upon deletion of the positional embedding, both models exhibit an improvement in generalization by approximately two digits on the regular data. While we don’t observe a significant accuracy discrepancy between the two models on regular data, their performance on repetitive data varies considerably. As shown in Figure 1B, the repetitive data does not pose a difficult challenge for the model with positional embedding. 
However, it becomes notably difficult for the model trained from scratch, which achieves low accuracy even with 9-digit data. In contrast, it is relatively simple for the pre-trained model, which manages to achieve perfect accuracy with 16-digit data. We speculate that the underlying reason is the inability to differentiate repeated tokens other than by their respective positions. Without absolute positional embedding, the models must resort to alternative methods to encode positional information. Given that the pre-trained model already contains various useful pre-trained components, it has greater flexibility to address this issue.

We propose a simple solution targeting this issue. We mark each token using a random tag so that the model can easily use the tag to distinguish the same tokens appearing at different positions. We call this component a random embedding. We show that this random tag can improve the generalization performance not only on this simple task of digit reversion but also on the more complicated task of addition.

**Random Embedding** For any chosen hash dimension $n_{\text{hash}}$, we generate an $n_{\text{hash}}$-dimensional random Gaussian vector with mean 0 and identity covariance. Then, we split the Gaussian vector into $n_{\text{head}}$ many vectors $\{h_i\}_{i=1}^{n_{\text{head}}}$, each with dimension $n_{\text{hash}}/n_{\text{head}}$, set the last $n_{\text{hash}}/n_{\text{head}}$ dimensions of the input embedding of each head to be $h_i$, and keep the remaining $(n_{\text{embed}} - n_{\text{hash}})/n_{\text{head}}$ dimensions unchanged. After the final layer, we use only the first $(n_{\text{embed}} - n_{\text{hash}})/n_{\text{head}}$ dimensions of each head to decode. We use newly generated random vectors for each epoch and during testing.

In Figure 2, we demonstrate the improved generalization capacity of GPT2-small equipped with random embedding. Figure 2a shows that adding random embedding increases the generalization capacity on both the regular data and the repetitive data in the digit reversion task. Back to the more complicated task of addition, we show in Figure 2b that if we simply delete the positional embedding, the randomly initialized model does not perform well.² If we keep the positional embedding, the model does not generalize to more digits. The random embedding shows significant improvement, achieving about the same generalization capacity as the "Recursive Scratchpad" data format that we show in Section 3.1.

---

² The accuracies of "No PosEmbed" differ slightly from the corresponding accuracies in Figure 1 because the accuracies are measured from different runs with different learning rates and numbers of epochs.
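A minimal PyTorch sketch of this construction is shown below. Where the description above is ambiguous, we make two assumptions: the random tag is drawn independently for every token position (otherwise identical tokens would still be indistinguishable), and the tags are resampled on every forward pass; the function names are ours.

```python
import torch

def add_random_tags(x: torch.Tensor, n_head: int, n_hash: int) -> torch.Tensor:
    """x: (batch, seq_len, n_embed) input embeddings. Overwrites the last
    n_hash // n_head dimensions of each head's slice with fresh Gaussian noise."""
    batch, seq_len, n_embed = x.shape
    head_dim, tag_dim = n_embed // n_head, n_hash // n_head
    x = x.view(batch, seq_len, n_head, head_dim).clone()
    x[..., head_dim - tag_dim:] = torch.randn(batch, seq_len, n_head, tag_dim, device=x.device)
    return x.view(batch, seq_len, n_embed)

def drop_random_tags(h: torch.Tensor, n_head: int, n_hash: int) -> torch.Tensor:
    """After the final layer, keep only the first (n_embed - n_hash) // n_head
    dimensions of each head before decoding."""
    batch, seq_len, n_embed = h.shape
    head_dim, tag_dim = n_embed // n_head, n_hash // n_head
    kept = h.view(batch, seq_len, n_head, head_dim)[..., : head_dim - tag_dim]
    return kept.reshape(batch, seq_len, n_embed - n_hash)
```

Because the tags are freshly sampled for each epoch and at test time, the model cannot memorize them and can only use them as markers for telling identical tokens at different positions apart.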
4 ADDITION IN NATURAL LANGUAGE SETTING

In the previous sections, we focused on the case where the training data consists solely of arithmetic data. However, in practice, we need to do the arithmetic operations in the natural language setting. Training data consisting exclusively of arithmetic data is usually easy to collect as it can be generated programmatically in large quantities. On the contrary, obtaining arithmetic information embedded in natural language is a more arduous task due to its rarity in natural language content. Consequently, it is important to understand if training on purely arithmetic data can equip the model with the ability to perform arithmetic tasks within a natural language setting.

In this section, we explore a task that involves mixing natural language data with purely arithmetic data to investigate the model's ability to integrate both data types. The natural language data in this case includes dialogues on solving addition problems, with a substantial amount of samples for easy addition questions and a smaller portion for difficult ones. Such a dataset structure reflects the real-world challenge of readily collecting easy tasks, while struggling to find natural language data that solves more complex problems. Alongside this, we incorporate purely arithmetic data, which is always correct and can be effortlessly produced using computer programs. Our primary objective is to examine whether the accurate arithmetic data can help the model solve the complex tasks embedded in the natural language context.

Our experiments show that in our setting, training solely on the dialogue data cannot guarantee an accurate solution to the difficult problems due to the lack of difficult samples in the training set and the errors present. If we naively mix the arithmetic data with the natural language data, we don't see a significant boost in accuracy, which shows that integrating the two types of data is challenging if they follow different formats. One obvious difficulty arises when the arithmetic data follows a certain pattern: the model can easily learn the arithmetic task by relying on the positional information. However, when the arithmetic task appears in the natural language context, it won't follow the same positional pattern, causing the model to struggle to connect the two types of data. Overreliance on positional embedding is a recurring issue when using transformers for arithmetic tasks, and it represents the main challenge we discuss in Section 3. There, we tackle this issue from two aspects: data format and alternative position embedding. We show in our experiments that similar ideas can be applied to the integration of natural language and arithmetic data, thus facilitating the merging of these two types of data.

| Data Format | Examples |
|----------------------|------------------------------------------------------------------------------------------------------------------------------------------|
| Dialogue Data (Dia) | Student: Excuse me, can you help me with something? I need to add two numbers, 842 and 62. Teacher: Of course, let me do the calculation for you. The answer is 904. Student: Good morning! Can you help me with a math problem? I need to find the sum of 324 and 402. Teacher: Good morning! Sure thing. The answer is 726. |
| Addition Data - Basic| $48 + 4 = 52$ $375 + 261 = 636$ $5051 + 8539 = 13590$ |
| Addition Data - Random Space | $48 \quad 4 \quad 52$ $375 \quad 261 \quad 636$ $5051 \quad 8539 \quad 13590$ |

Table 8: Examples of the natural language and arithmetic data used.

We use three types of data formats, formally defined in Appendix D, with examples shown in Table 8. Our dialogue dataset contains a large number of 2-3-digit additions, but not enough 4-5-digit additions, while the addition dataset contains a large number of both 2-3-digit and 4-5-digit additions. We compare in Table 9 models trained on datasets that combine dialogue data with addition data (Dia+Basic and Dia+RandomSpace) to those trained solely on dialogue data (Dia). We show the results for randomly initialized models trained for 50 epochs and 100 epochs. Without any arithmetic data, models trained exclusively on dialogue struggle to accurately perform 4-5-digit addition.
This confirms our hypothesis, given that the dialogue lacks a sufficient number of correct 4-5-digit examples. With arithmetic data, for models with the absolute position embedding, “Basic” doesn’t significantly enhance their ability to tackle addition tasks within the dialogue prompt. In contrast, using “Random space”, removing the absolute position embedding, and integrating random embedding all improve the model’s ability to leverage addition data in supporting dialogue-based addition tasks. For models that exclude absolute position embedding, as well as those with random embedding, the testing accuracy for “Basic” and “Random space” is similar when trained for long enough. Nevertheless, models can learn the “Random space” format slightly faster, as shown in Table 9a and Table 9b. Models without position embedding exhibit slightly better accuracy compared to those with random embedding in dialogue contexts. Conversely, models with random embedding outperform those lacking position embedding in pure addition scenarios, as highlighted in Table 9c and Table 9d. In conclusion, to allow language and arithmetic integration, we need either data format modification such as random space, or position embedding modification such as excluding absolute positional embedding or adding random embedding. Our conclusions here align with those in Section 3. For models with absolute position embedding, the “Basic” format is less effective due to its highly predictable pattern, allowing models to overly depend on positional information. Removing position embedding addresses this, but can create new stability issues as the model needs alternative ways to interpret position data. Introducing random embedding can offset the drawbacks of removing position embedding, resulting in a more stable performance. REFERENCES François Charton. Linear algebra with transformers. arXiv preprint arXiv:2112.01898, 2021. François Charton. What is my math transformer doing?–three results on interpretability and generalization. arXiv preprint arXiv:2211.00170, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*, 2021. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers on compositionality. *arXiv preprint arXiv:2305.18654*, 2023. Michael Hanna, Ollie Liu, and Alexandre Variengien. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. *arXiv preprint arXiv:2305.00586*, 2023. Samy Jelassi, Stéphane d’Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, and François Charton. Length generalization in arithmetic transformers. *arXiv preprint arXiv:2306.15400*, 2023. Nayoung Lee, Kartik Sreenivasan, Jason D Lee, Kangwook Lee, and Dimitris Papailiopoulos. Teaching arithmetic to small transformers. *arXiv preprint arXiv:2307.03381*, 2023. Yuxuan Li and James L McClelland. Systematic generalization and emergent structures in transformers trained on structured tasks. *arXiv preprint arXiv:2210.00400*, 2022. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. *arXiv preprint arXiv:2305.20050*, 2023. OpenAI. Gpt-4 technical report. 
*arXiv preprint arXiv:2309.05463*, 2023. Ofir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. *arXiv preprint arXiv:2108.12409*, 2021. Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. Limitations of language models in arithmetic and symbolic induction. *arXiv preprint arXiv:2208.05051*, 2022. Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length generalization of transformers. *arXiv preprint arXiv:2305.16843*, 2023. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. *arXiv preprint arXiv:2104.09864*, 2021. Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. *arXiv preprint arXiv:2211.14275*, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. *Advances in Neural Information Processing Systems*, 35:24824–24837, 2022. Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. Gpt can solve mathematical problems without a calculator. *arXiv preprint arXiv:2309.03241*, 2023. Yi Zhang, Arturs Backurs, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, and Tal Wagner. Unveiling transformers with lego: a synthetic reasoning task. *arXiv preprint arXiv:2206.04301*, 2022. Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi. Teaching algorithmic reasoning via in-context learning. *arXiv preprint arXiv:2211.09066*, 2022.
UgTrngiN16
How does the LangProp framework's evolutionary algorithm-based updating process compare in efficiency and effectiveness with the gradient-based optimization used in traditional neural network training?
LangProp: A code optimization framework using language models applied to driving Anonymous authors Paper under double-blind review Abstract LangProp is a framework for iteratively optimizing code generated by large language models (LLMs) in a supervised/reinforcement learning setting. While LLMs can generate sensible solutions zero-shot, the solutions are often sub-optimal. Especially for code generation tasks, it is likely that the initial code will fail on certain edge cases. LangProp automatically evaluates the code performance on a dataset of input-output pairs, as well as catches any exceptions, and feeds the results back to the LLM in the training loop, so that the LLM can iteratively improve the code it generates. By adopting a metric- and data-driven training paradigm for this code optimization procedure, one could easily adapt findings from traditional machine learning techniques such as imitation learning, DAgger, and reinforcement learning. We demonstrate the first proof of concept of automated code optimization for autonomous driving in CARLA, showing that LangProp can generate interpretable and transparent driving policies that can be verified and improved in a metric- and data-driven way. Our code will be open-sourced and is available at https://github.com/langprop-iclr24/LangProp. 1 Introduction Building systems that can self-improve with data is at the core of the machine learning paradigm. By leveraging vast amounts of data and having an automated feedback loop to update models according to an objective function, machine learning methods can directly optimize the metrics of interest, thus outperforming systems that are handcrafted by experts. In the early history of artificial intelligence (AI), Symbolic AI, e.g., rule-based expert systems (Hayes-Roth [1985], Jackson [1986]), was a dominant and perhaps a more intuitive and explainable approach to solving tasks in an automated way, and is still widely used in fields such as medicine (Abu-Nasser [2017]) and autonomous driving (Badue et al. [2021]). However, there have been numerous successes in recent decades in machine learning, e.g., deep neural networks, that demonstrate the advantage of data-driven learning. The innovation in Large Language Models (LLMs) (Brown et al. [2020], OpenAI [2023], Touvron et al. [2023]) is a prominent success enabled by neural networks. Trained on both natural language and code, they can translate human intent and logic into executable code and back, expanding the boundaries of applying logic and reasoning. Unlike other machine learning techniques, LLMs have an affinity with Symbolic AI since they operate in discrete symbolic input-output spaces. The generated outputs are interpretable, even though the internal representation of these tokens is in a continuous embedding space. This observation led us to question if it is possible to have the best of both worlds – having an interpretable and transparent system, characteristic of Symbolic AI, which can self-improve in a data-driven manner, following the machine learning paradigm. We believe that LLMs provide the missing piece of the puzzle; the optimization mechanism. Our insight is that we can draw a direct analogy from training neural networks and train symbolic systems by leveraging the power of language models to interpret and generate scripts. Using the analogy of model training, an LLM can be used as an optimizer equivalent to stochastic gradient descent or Adam. 
The actual model in our paradigm is an object that handles the initialization and updates of parameters as well as the forward pass logic, where the parameters are a collection of symbolic scripts that the LLM generates. At every iteration, we perform a forward pass through the model, compare it against the ground truth in the dataset, and pass the scores and feedback into the LLM which interprets the results and updates the scripts in a way that fixes the issues raised. While many methods use LLMs for code generation, and systems such as Auto-GPT (Richards, 2023) iteratively query LLMs to execute tasks in an agent-like manner, as far as we know, we are the first to completely translate and apply the training paradigm used in machine learning for iterative code generation. We draw inspiration from MineDojo VOYAGER (Wang et al., 2023), which first introduced the idea that a collection of code generated by LLMs (skill library) can be considered as sharable and fine-tunable checkpoints. However, VOYAGER’s implementation is specific to Minecraft, and additional work is needed to apply its approach to other domains. We propose LangProp, a general code optimization framework that is easily adaptable to many application domains. Autonomous driving is a key area in which model interpretability and transparency are critical. We consider LangProp to be a valuable proof of concept for building interpretable and language-instructable systems in a more automated and learnable way. We tested our hypotheses that (a) LangProp can generate interpretable code that learns to control a vehicle, (b) LangProp can improve driving performance with more training data in comparison to zero-shot code generation, and (c) we can easily transfer training paradigms from machine learning to LangProp such as imitation learning, reinforcement learning (Sutton & Barto, 2018) and DAgger (Ross et al., 2011). 2 RELATED WORK 2.1 LLMs FOR CODE GENERATION Transformer-based models (Vaswani et al., 2017) have shown outstanding performance in code generation tasks (Chen et al., 2021; Li et al., 2022; Xu et al., 2022; Nijkamp et al., 2023; Fried et al., 2023). In particular, general purpose LLMs (Ouyang et al., 2022; OpenAI, 2023) have shown remarkable capabilities of code generation, translating natural language into code, and vice versa. However, there is no guarantee that the generated code is error-free. Benchmarks have been suggested to evaluate LLMs on the quality of code generation (Chen et al., 2021; Liu et al., 2023). Code generation with execution is especially relevant to our work. Cobbe et al. (2021) and Li et al. (2022) used majority voting on the execution results to select code from a pool of candidates, but this is prone to favoring common erroneous solutions over infrequent correct solutions. Ni et al. (2023) suggested a ranking mechanism using a learned verifier to assess code correctness. Given the code, its specification, and its execution results, it computes the rankings based on the code correctness and code generation probability. CLAIRIFY (Skreta et al., 2023) implemented automatic iterative prompting that catches errors and provides feedback to the LLM until all issues are resolved. Tangentially related fields are Automated Program Repair (APR) (Xia & Zhang, 2022; Xia et al., 2022), unit test generation (Roziere et al., 2022), and planning applied to LLMs and code generation (Le et al., 2022; Zhang et al., 2023). 
APR is typically solved as a text infill task by identifying an erroneous block of code, masking it out, and querying an LLM, providing the surrounding code as context. Planning for LLMs formulates code generation as a sequence generation task and applies Reinforcement Learning techniques. While these approaches are orthogonal to our approach of iteratively generating code using a pre-trained general-purpose LLM as an optimizer, findings from these fields may be compatible with LangProp for future work. 2.2 LARGE LANGUAGE MODELS FOR AUTOMATING COMPOSITIONAL TASKS LLM-powered agents have demonstrated sophisticated planning capabilities. Sequential prompting with the history of observation, action, and the reason for the action was proposed by ReAct (Yao et al., 2023) as an improvement to Chain-of-Thought prompting (Wei et al., 2022), which has also been applied to autonomous driving (Fu et al., 2023). Auto-GPT (Richards, 2023) automated tasks by iteratively generating a sequence of subtasks in finer detail until they are executable. A similar strategy was applied to robotics (Huang et al., 2022). SayCan (Ahn et al., 2022) used LLMs to generate candidate subgoals and assessed their affordances with a value function given visual observations to ground the agent’s behavior. VIMA (Jiang et al., 2023) and PaLM-E (Driess et al., 2023) demonstrated profound reasoning and execution capabilities on multi-modal tasks such as Visual Q&A and robotics by fine-tuning LLMs to allow multi-modal prompting. Inner Monologue (Huang et al., 2023) used environment and user feedback to replan for embodied tasks. Unlike our method, the above methods require an LLM in the loop during inference, whereas our method only requires access to an LLM during the code optimization stage. (Liang et al., 2023) and (Singh et al., 2023) used LLMs to directly generate code for robotics, while ViperGPT (Didac et al., 2023) and Vis-Prog (Gupta & Kembhavi, 2023) composed pre-trained vision-and-language models to solve challenging vision tasks which require reasoning and domain knowledge. However, none of the above methods implement code optimization via iterative prompting. Our method is inspired by VOYAGER (Wang et al., 2023), which integrates environment feedback, execution errors, and self-verification into an iterative prompting mechanism for embodied control in Minecraft. VOYAGER maintains a skill library, a collection of verified reusable code, which can be considered as checkpoints. However, there is no mechanism to optimize or remove a sub-optimal skill once it has been added to the library. We address this limitation and present a more general code optimization framework that can be applied to a variety of domains, e.g. autonomous driving. 2.3 Autonomous Driving and the CARLA Benchmark Approaches to Autonomous Driving can be broadly classified into modular systems and end-to-end systems (Yurtsever et al., 2020). Most systems take a modular approach (Urmson et al., 2008; Levinson et al., 2011; Wei et al., 2013; Maddern et al., 2017), which has human-defined rules that orchestrate separately engineered components for localization and mapping, object detection, tracking, behavior prediction, planning, and vehicle control. Such systems allow compartmentalization and better interpretability, but can be complex and require domain knowledge to maintain and update. Another challenge is error propagation (McAllister et al., 2017), i.e. the upstream outputs can be erroneous and must be corrected downstream. 
Recent work has harnessed end-to-end learning to address these issues. Imitation learning (IL) (Bojarski et al., 2016; Bansal et al., 2018) optimizes the policy to match actions taken by experts, and is the most widely used approach. However, its performance is upper-bounded by the expert. Deep reinforcement learning has also shown successes in simulation (Sallab et al., 2017), on the road (Kendall et al., 2019), and in combination with IL (Lu et al., 2022). Our work combines both the benefit of interpretability of expert systems while also taking a data-driven approach, exposing the system to potential failure modes and adverse scenarios during training time and iteratively optimizing the system towards a well-defined driving metric so that the resulting system is robust to adverse events and potential errors in intermediate components. CARLA (Dosovitskiy et al., 2017) is a widely used open-sourced 3D simulator for autonomous driving research. Many prior works on CARLA have open-sourced their expert agents. Roach (Zhang et al., 2021) trained a PPO agent (Schulman et al., 2017) on handcrafted reward signals with privileged information. The heavy lifting is done at the reward shaping level, where hazardous agents are identified and the desired speed and pose are computed. Roach expert is also used in MILE (Hu et al., 2022) and TCP (Wu et al., 2022), where TCP has an additional emergency braking upon detecting potential collisions. TransFuser (Chitta et al., 2022), InterFuser (Shao et al., 2023) and TF++ (Jaeger et al., 2023) implement their handcrafted expert systems, either using cuboid intersections or line intersections for hazard detection. TransFuser also introduced the Longest6 benchmark, which consists of longer routes compared to the official CARLA benchmark and is less saturated. 3 The LangProp Framework The LangProp framework, visualized in Figure 2, addresses a general task of optimizing code on any given metric of success in a data-driven way, similar to how a neural network is optimized on an objective function. LangProp performs iterative prompting to improve code performance, using the inputs, outputs, exceptions, metric scores, and any environmental feedback to inform the LLM upon updates. The updates in LangProp are performed using a form of an evolutionary algorithm (Bäck & Schwefel, 1993). The following sections describe the key concepts in LangProp in more detail. 3.1 Model definition The LangProp model consists of a setup prompt, an update prompt, and a collection of executable code generated by the LLM, which we refer to as a policy. While neural models are parameterized by floating-point weights, the parameters of a LangProp model is the collection of policies. Each policy is associated with an executable script as well as a statistics tracker, which updates the priority, an aggregate measure of the policy’s performance with respect to the training objective. The priority is used to rerank the policies so that the best-performing policies are used for updates and inference. Figure 1: An overview of the LangProp framework, which consists of a LangProp model, an LLM optimizer, and a LangProp trainer. During training, the LLM generates and updates the policy scripts which are evaluated against a training objective. The performances of the policies are monitored and aggregated over time by a policy tracker as priorities, which is then used to rerank the policies. Policies with higher priorities are selected for updates, and the best policy is used for inference. 
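Before describing each component in detail, the following self-contained sketch illustrates the loop summarized above. It is illustrative pseudocode of the idea, not LangProp's actual API: `query_llm` stands in for a chat-completion call that returns revised scripts, and each generated script is assumed to define a function named `predict`.

```python
import traceback

def evaluate(policy_fn, batch, objective, exception_penalty=-10.0):
    """Forward pass of one policy over a batch; exceptions are caught and penalized."""
    scores, feedback = [], []
    for inputs, label in batch:
        try:
            pred = policy_fn(**inputs)
            scores.append(objective(pred, label))
            feedback.append((inputs, pred, label, None))
        except Exception:
            scores.append(exception_penalty)
            feedback.append((inputs, None, label, traceback.format_exc()))
    return scores, feedback

def train_step(policies, batch, objective, query_llm, n_keep=8, n_update=2, n_responses=2):
    """One step: score every policy, rerank by priority, revise the best ones with the LLM."""
    for p in policies:
        scores, feedback = evaluate(p["fn"], batch, objective)
        p["priority"] = sum(scores) / len(scores)            # simple average; the running
        p["worst_case"] = feedback[scores.index(min(scores))]  # average of Eq. (1) generalizes this
    policies = sorted(policies, key=lambda p: p["priority"], reverse=True)[:n_keep]
    new_policies = []
    for p in policies[:n_update]:
        for script in query_llm(p["script"], p["worst_case"], n_responses):
            namespace = {}
            exec(script, namespace)                          # compile the revised policy script
            new_policies.append({"fn": namespace["predict"], "script": script})
    for p in new_policies:
        scores, _ = evaluate(p["fn"], batch, objective)
        p["priority"] = sum(scores) / len(scores)            # initialize priorities on the same batch
    return sorted(policies + new_policies, key=lambda p: p["priority"], reverse=True)
```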
3.1.1 Policy setup

The initialization of the policies is done similarly to zero-shot code generation. The definition and specification of the requested function are given, for example, as the docstring of the function, including the names and types of the inputs and outputs, what the function is supposed to achieve, and a template for the function. We also adopt Chain-of-Thought prompting (Wei et al., 2022). An example of a setup prompt can be found in Appendix A.1. The response from the LLM is parsed to extract the solution code snippet. Multiple responses are collected to ensure the diversity of the initial policies.

3.1.2 Training objective

The advantage of LangProp over typical usage of LLMs for code generation is that it performs code optimization in a metric- and data-driven manner. In many tasks, it is easier to provide a dataset of inputs and corresponding ground truth outputs rather than to accurately specify the requirements for a valid solution or write comprehensive unit tests. Similar to how neural networks are trained, the user defines an objective function that measures how accurate the policy prediction is against the ground truth, e.g. L1 or L2 loss. A penalty is given if the policy raises an exception.

3.1.3 Forward-pass and feedback

Similar to training neural networks, LangProp assumes a dataset of inputs and associated ground truth labels for supervised learning (or rewards/returns for reinforcement learning, discussed in Section 4.3). For every batch update, the inputs are fed into all the policies currently in the LangProp model to make predictions, equivalent to a forward-pass. For each policy, the prediction is evaluated by the objective function which returns a score. If an exception is raised during execution of a policy script, it is caught by the model and an exception penalty is returned as a score instead. The execution results, which include the score, exception trace, and any print messages from the execution, are fed back into the model and are recorded by the policy tracker. This is analogous to how parameters in a neural network are assigned gradients during back-propagation (see Appendix A.9). This information stored by the tracker is used in the policy update step in Section 3.1.5.

3.1.4 Priority

The priority is, simply put, an average of scores with respect to the training objective. In case a small batch size is required for faster computation, a running average of the scores is used as the priority rather than ranking the policies' performance based on scores from the current batch alone, which may result in highly stochastic results. This is sufficient for supervised learning with a fixed-size dataset. As discussed later in Section 4.3, however, a more complex training method such as reinforcement learning or DAgger (Ross et al., 2011) has a non-stationary training distribution. Therefore, we use exponential averaging with a discount factor of $\gamma \in (0, 1]$ following Equation (1).

$$P_{i,k} = \left( \sum_{j=1}^{N_k^B} s_{i,j,k} + W_{i,k-1} P_{i,k-1} \right) / (N_k^B + W_{i,k-1}), \quad W_{i,k} = \gamma(N_k^B + W_{i,k-1})$$

Here, $N_k^B$, $P_{i,k}$ and $W_{i,k}$ are the batch size, priority, and priority weighting of the $k$-th batch for the $i$-th policy, respectively, and $s_{i,j,k}$ is the objective score of the $i$-th policy for the $j$-th element in the $k$-th batch. Initial conditions are $P_{i,0} = 0$ and $W_{i,0} = 0$.
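As a concrete reading of Equation (1), the per-policy bookkeeping can be implemented as follows; the class and attribute names are illustrative rather than LangProp's actual interface.

```python
class PriorityTracker:
    """Exponentially averaged priority of a single policy, following Equation (1)."""

    def __init__(self, gamma: float = 0.9):
        self.gamma = gamma       # discount factor in (0, 1]
        self.priority = 0.0      # P_{i,0}
        self.weight = 0.0        # W_{i,0}

    def update(self, batch_scores):
        """batch_scores: objective scores s_{i,j,k} of this policy for one batch."""
        n = len(batch_scores)
        self.priority = (sum(batch_scores) + self.weight * self.priority) / (n + self.weight)
        self.weight = self.gamma * (n + self.weight)
        return self.priority

# With gamma = 1 this reduces to a plain running average over all samples seen so far.
tracker = PriorityTracker(gamma=0.9)
tracker.update([0.8, 1.0, -10.0])    # one batch of scores, including an exception penalty
```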
By weighting recent scores higher, we ensure policies with higher priorities have high performance on the most up-to-date dataset. ### 3.1.5 Policy Reranking and Update This step updates the model based on the most recent forward-backward pass and updated priorities. This corresponds to the optimization step in neural network training, where parameters are updated based on gradients computed on the most recent batch. First, the policies are reranked by the priorities and the top $N^K$ number of policies are kept, out of which the top $N^U$ policies are selected for updates. For each of these policies, the policy tracker storing records of inputs, outputs and scores is queried for the worst-case input-output pairs in the training batch that had the minimum score, along with any exception or print messages during the execution. This information, together with the old policy script, is embedded into the update prompt by a prompt template engine (Section 3.2). The update prompt is passed to the LLM, which returns $N^R$ responses containing new policy scripts. After the model update, there are $N^U \times N^R$ new policies, as well as up to $N^K$ old policies. To initialize the new policies with sensible priorities, extra forward-backward passes are performed on these policies with the same batch of samples used for the model update. Finally, all policies are sorted according to their priorities, ready for inference or training on a new batch. ### 3.2 Prompt Template Engine During the policy update stage, we require a dynamic prompting mechanism to embed information about the input, predicted output, ground truth, exception, print messages, and the policy script to be revised. The logic to generate these prompts is sometimes complex, for example, predictions are only made when there are no exceptions. To enable flexible prompt generation while avoiding any hardcoding of the prompts in the codebase, we developed a simple yet powerful prompt template that can parse variables, execute Python code embedded within the prompt, and import sub-prompts from other files, and will be included in our open-sourced solution. The update prompt example shown in Appendix A.2 makes extensive use of the policy template engine’s capabilities. ### 3.3 Training Paradigm LangProp mirrors the code abstraction of PyTorch (Paszke et al., 2019) and PyTorch Lightning (Falcon, 2019) for the module and trainer interfaces, respectively. This allows LangProp to be task-agnostic, making it easily applicable to a range of domains and use cases. Moreover, it helps highlight the similarities between neural network optimization and code optimization using LangProp and facilitates a smooth integration of other training paradigms for neural network training. Importantly, LangProp’s internal implementation does not depend on PyTorch or PyTorch Lightning. LangProp supports PyTorch datasets and data loaders, as well as any iterable dataset object for training and validation. Listing 1 shows an example of a standard LangProp training script. ```python train_loader = DataLoader(train_data, batch_size, shuffle=True, collate_fn=lambda x: x) val_loader = DataLoader(val_data, batch_size, shuffle=True, collate_fn=lambda x: x) model = LPModule.from_template(name=model_name, root=model_root) trainer = LPTuner(model, RunConfig(run_name=run_name)) trainer.fit(train_loader, val_loader, epochs=epochs) # train model ``` Listing 1: Training a LangProp model with a LangProp trainer. 
After every training step on a mini-batch, the trainer saves a checkpoint, which consists of the setup prompt, update prompt template, the currently kept policy scripts (maximum of $N^K + N^U \times N^R$), and the statistics monitored by the policy tracker (priorities $P$ and priority weights $W$). Since these can be stored as text or JSON files, the size of a checkpoint is in the order of a few hundred kilobytes. Checkpoints can be used to resume training, fine-tune the model, or for inference. ```python 1 model = LPModule.from_checkpoint(checkpoint) # load checkpoint 2 model.setup(config=RunConfig()) 3 prediction = model(*input_args, **input_kwargs) # make prediction ``` Listing 2: Inference with a pre-trained LangProp model checkpoint. Listing 2 shows how a LangProp checkpoint can be loaded and used for inference. The policy with the highest priority is used for inference. Since policies are parameterized as executable code, the use of an LLM is only required during training, not during inference. Since querying LLMs is both expensive and slow, this is a key advantage of the LangProp approach, which makes integration of LLMs more feasible for real-time applications, such as robotics and autonomous driving. 4 LangProp Applied to Driving in CARLA In this section, we describe how the LangProp framework can be used in the context of autonomous driving. We chose the CARLA environment (Dosovitskiy et al., 2017) as a benchmark since (a) autonomous driving requires interpretable driving policies, (b) CARLA has a rich collection of human-implemented expert agents to compare against, and (c) a metric-driven learnable approach would be beneficial since driving decisions such as when to lane-change or to give way are challenging planning problems, and even human-implemented experts have sub-optimal performance. 4.1 Expert We implemented our expert agent for data collection and to provide pseudo-ground-truth actions to train the LangProp agent with imitation learning. While TransFuser (Chitta et al., 2022) and TF++ (Jaeger et al., 2023) use a computationally expensive 3D bounding box collision detection algorithm, and InterFuser (Shao et al., 2023) uses line collision which is faster but less accurate, we use an efficient polygon collision detection algorithm between ground-projected bounding boxes. By extrapolating the motion of the ego vehicle and the actors into the future and checking for any polygon intersections, the safety margins to the pedestrians and vehicles are calculated. Together with the distance to the nearest traffic light and/or stop sign, the target speed is determined to give a 2 s margin. Steering is evaluated by calculating the angle to the next waypoint, which is 4 m ahead of the ego vehicle. A PID controller is used for low-level control to convert the target speed and angle to throttle, brake, and steering. For more implementation details, see Appendix B.2. 4.2 LangProp Agent Similarly to our expert and all the baseline experts, we provide privileged information from the CARLA simulator to the agent. While we manually convert the bounding box coordinates of actors in the scene into the ego-relative frame of reference, we let LangProp handle these computations, providing everything in absolute world coordinates. We provide the location, orientation, speed, length, and width of the ego vehicle as well as for other vehicles and pedestrians that are within the range of 50 m. Importantly, we do not filter out actors even if they are irrelevant to the driving agent. 
We also provide the target waypoint (4 m ahead, used by other baseline experts) and the distances to a red traffic light and stop sign along the current lane if they exist. Given this information, the LangProp policy is expected to return a desired speed level ("MOVE": 6 m/s, "SLOW": 1 m/s, "STOP": 0 m/s) and a turning angle for the ego vehicle. These are passed to an external PID controller to convert them into throttle, brake, and steering. A more detailed explanation of the function definition is given in Listing A.5, which is an extract of the setup prompt used in the LangProp model. Given the function definition as a docstring, an LLM generates policy script candidates that satisfy the specification and updates them following the procedures in Section 3. We use the GPT 3.5 Turbo 16k model, provided by OpenAI's Chat Completion API (OpenAI, 2022).

While it is straightforward for the policy to directly predict the speed or acceleration as numeric values, this makes the task of designing a suitable loss function for imitation learning more challenging and open-ended. Therefore, we opted for a categorical output, which simplifies the scoring function.

Figure 2: An overview of the LangProp agent training pipeline. The LangProp model is updated on a dataset that includes both offline expert data as well as online LangProp data annotated with expert actions, similar to DAgger. The agent is given negative rewards upon infraction.

4.3 Imitation Learning, DAgger, and Reinforcement Learning

We explore three major training paradigms often used to train embodied agents - imitation learning (IL), DAgger (Ross et al., 2011), and reinforcement learning (RL). In imitation learning, the accuracy of the policy outputs is measured against ground truth expert actions for a pre-collected dataset. Imitation learning is known to have issues with out-of-distribution inputs at inference time, since the expert's policy is used to collect the training data, while the learned policy is used for rollouts at inference time. DAgger addresses this issue by labeling newly collected online data with expert actions, and adding them to the expert-collected offline data to form an aggregate replay buffer. Both CARLA and the LangProp agent run at a frame rate of 20 Hz. LangProp adds training samples to the replay buffer every 10 frames, and a batch update is performed after every 100 new samples.

While DAgger solves the issue of distribution mismatch, the performance of the learned policy is still upper-bounded by the accuracy of the expert. It also does not take into account that certain inaccuracies are more critical than others. In the context of autonomous driving, actions that result in infractions such as collisions should be heavily penalized. Reinforcement learning offers a way of training a policy from reward signals from the environment, which is convenient since we can directly assign penalties upon any infractions according to the CARLA leaderboard (CARLA, 2020). While RL typically optimizes for maximum returns (discounted sum of future rewards), we simplify the setting by assigning an infraction penalty if there is an infraction in the next 2 s window. The agent monitors infractions every 10 frames, and triggers an update upon infractions.

Since infraction penalties are very sparse, and will become rarer as the policies improve, we adopt two strategies: (a) we combine RL training with imitation learning training that provides denser signals, and (b) we sample training data with infractions with 100 times higher sampling probability. The expert is only imitated upon no infractions, or if the expert was not the behavior policy which incurred the infraction, and an infraction cost is only given when the current policy takes the same action as the behavioral policy which caused the infraction while the expert chose a different action. For more details on the training objective, see Appendix C.2.
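One way to read this combined objective as a per-sample score is sketched below; the exact scoring function is given in Appendix C.2, so the penalty values, the use of the categorical speed level as the compared action, and the approximation of "the expert was not the behavior policy" by its action differing are our own simplifying assumptions.

```python
def training_score(pred_speed, expert_speed, behavior_speed, infraction_within_2s,
                   exception=False, exception_penalty=-10.0, infraction_penalty=-10.0):
    """Illustrative score for one sample, using only the speed level ("MOVE"/"SLOW"/"STOP")."""
    if exception:
        return exception_penalty
    score = 0.0
    # Infraction cost: only when the current policy repeats the action of the behavior
    # policy that caused the infraction while the expert chose a different action.
    if infraction_within_2s and pred_speed == behavior_speed and expert_speed != behavior_speed:
        score += infraction_penalty
    # Imitation term: imitate the expert on infraction-free samples, or when the expert
    # was not the policy that incurred the infraction.
    if not infraction_within_2s or behavior_speed != expert_speed:
        score += 1.0 if pred_speed == expert_speed else 0.0
    return score
```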
5 Experiments

We compared our LangProp agent against RL experts with privileged information (Roach (Zhang et al., 2021), TCP (Wu et al., 2022)) as well as human-implemented experts (TransFuser (Chitta et al., 2022), InterFuser (Shao et al., 2023), TF++ (Jaeger et al., 2023), ours). We used the official training and testing routes provided by the CARLA leaderboard (CARLA, 2020), as well as the Longest6 benchmark (Chitta et al., 2022) that has longer routes with denser traffic. See Appendix D.1 for more details on the benchmark and the routes and towns used. For the LangProp agent, only the training routes are used for imitation/reinforcement learning at training time, and the saved checkpoints are used for inference during evaluation runs. The results are shown in Table 1.

Table 1: Driving performance of expert drivers in CARLA version 0.9.10. The driving score is a product of the route completion percentage $R$ and the infraction factor $\bar{I}$. IL and RL stand for imitation learning and reinforcement learning. DAgger uses both online and offline data.

| Method | Training routes | Testing routes | Longest6 |
|-----------------|-----------------|---------------|----------|
| | Score ↑ $R$ ↑ $\bar{I}$ ↑ | Score ↑ $R$ ↑ $\bar{I}$ ↑ | Score ↑ $R$ ↑ $\bar{I}$ ↑ |
| Roach expert | 57.8 95.9 0.61 | 63.4 98.8 0.64 | 54.9 81.7 0.67 |
| TCP expert | 64.3 92.3 0.71 | 72.9 93.2 0.77 | 46.9 63.1 0.76 |
| TransFuser expert| 69.8 94.5 0.74 | 73.1 91.3 0.80 | 70.8 81.2 0.88 |
| InterFuser expert| 69.6 83.1 0.86 | 78.6 81.7 0.97 | 48.0 56.0 0.89 |
| TF++ expert | 90.8 95.9 0.94 | 86.1 91.5 0.94 | 76.4 84.4 0.90 |
| **Our expert** | 88.9 92.8 0.95 | **95.2** 98.3 0.97 | 72.7 78.6 0.92 |
| LangProp: Offline IL | 0.07 0.37 0.97 | 0.00 1.00 | 0.00 1.00 |
| LangProp: DAgger IL | 36.2 94.5 0.40 | 41.3 95.3 0.44 | 22.6 87.4 0.30 |
| LangProp: DAgger IL/RL | 64.2 90.0 0.72 | 61.2 95.2 0.64 | 43.7 71.1 0.65 |
| LangProp: Online IL/RL | **70.3** 90.5 0.78 | **80.9** 92.0 0.89 | **55.0** 75.7 0.73 |

5.1 Expert and LangProp agents

Our expert and the TF++ expert significantly outperformed all other expert agents in all routes, and our expert outperformed TF++ by a margin on the test routes. The core collision avoidance logic is just 100 lines of code, with additional preprocessing and tooling for data collection. From the breakdown of the scores, our expert seems to prioritize safer driving with fewer infractions (higher infraction factor $\bar{I}$) by trading off route completion compared to TF++ in the Longest6 benchmark.

For the LangProp agent, we observe that training using offline samples, DAgger, and online samples improves performance in this order. Adding the infraction penalties as an additional reinforcement learning objective further improved the performance.
The best-performing agent, LangProp trained on online data with IL and RL, achieved better performance than the Roach expert (trained with PPO) as well as the TransFuser and InterFuser experts (both written by researchers) on all benchmarks, apart from TransFuser on the Longest6 benchmark.

The result has two important implications. Firstly, the code selection metric (the training objective) plays a large role in the ultimate performance of the code. This is an important finding since prior work on code generation mostly focused on error correction given exceptions. Our results demonstrate that for complex tasks, it is important to treat code generation as an iterative optimization process rather than a zero-shot task. Secondly, training using LangProp exhibits similar characteristics as training in deep learning; in deep learning, it is a well-studied problem that policies trained with imitation learning on offline datasets do not generalize to out-of-distribution online data. DAgger and reinforcement learning are two of the common ways of addressing this problem. Our results show that these training paradigms can also be effective when used in LangProp.

5.2 Demonstration of causal confusion when trained offline

A common failure mode of offline-trained models was that the agent remained stationary indefinitely until the timeout was reached. Upon inspection of the policy code that was generated, we were able to identify the failure to be a phenomenon known as causal confusion in imitation learning (De Haan et al., 2019). A snippet of code responsible for such failure in one of the runs is shown in Listing 3. This exemplifies the interpretability of LangProp models, allowing us to directly assess the source of failure. The code predicts 0 speed when the agent's current speed is already close to 0. Note that this is not a failure of the LangProp algorithm, but due to such a policy maximizing the imitation learning objective on an offline dataset, bypassing the need to learn a more complex policy. This phenomenon is commonly researched in the context of deep imitation learning, and can be avoided by employing training on online data, e.g. using DAgger or RL. We believe our work to be the first to report a similar phenomenon using LLMs for policy optimization.

Listing 3: Identifying causal confusion in the policy when trained purely offline
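The original snippet is not reproduced here; the following is our own illustrative reconstruction of the failure mode described above, in the style of the generated policies, and should not be read as the actual Listing 3.

```python
def predict_speed_level(ego_speed, hazard_detected):
    """Illustrative causally confused policy (a reconstruction, not the generated code)."""
    # Shortcut learned from offline data: a vehicle that is currently (almost) stopped
    # is almost always still stopped at the next label, so predicting "STOP" whenever
    # the current speed is near zero scores highly offline -- and keeps the agent
    # stationary forever when deployed.
    if ego_speed < 0.1:
        return "STOP"
    return "STOP" if hazard_detected else "MOVE"
```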
From the training scores on the replay buffer and/or offline dataset in Figure 3a, we see that the agents trained with RL on infractions have spikes corresponding to infractions. This is due to over-sampling infractions when they occur, allowing the policy update to immediately address the issue. DAgger has a milder response compared to training just on online data because the offline dataset does not include on-policy infractions. The higher rate of infractions in the training distribution may be why the online trained agent has a lower training score but has a higher driving performance. 6 CONCLUSION We presented LangProp, a framework that uses LLMs for data-driven code optimization, and demonstrated its capability of generating driving policies in CARLA. We showed that classical training paradigms such as imitation learning, DAgger, and reinforcement learning directly translate to training with LangProp, and the choices of the objective function and the training data distribution can be used to guide which policies are selected. Since numerous candidate solutions satisfy the code specification, automatically optimizing the code to maximize a given performance metric has been a key missing feature in few-shot code generation. The LangProp framework provides this feature by reformulating the machine learning training paradigm in the context of using LLMs as code optimizers and treating policy code as parameters of the model. We believe that the LangProp paradigm opens up many possibilities for data-driven machine learning with more interpretability and transparency. REPRODUCIBILITY STATEMENT We will open-source the code both for the general LangProp framework, as well as the code for training and evaluating the LangProp agent in CARLA. More details of the implementation and design decisions can be found in the appendices. For the ICLR 2024 conference submission, supplementary materials can be found at https://github.com/langprop-iclr24/LangProp/ which includes the code, pre-trained checkpoints using LangProp, videos of sample runs by the LangProp agent. We also include self-contained minimal examples of applying LangProp to tasks such as Sudoku and CartPole. REFERENCES Bassem Abu-Nasser. Medical expert systems survey. *International Journal of Engineering and Information Systems (IJEIS)*, 1(7):218–224, 2017. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022. Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. *Evolutionary computation*, 1(1):1–23, 1993. Claudine Badue, Rânik Guidolini, Raphael Vivaquqa Carneiro, Pedro Azevedo, Vinicius B Cardoso, Avelino Forechi, Luan Jesus, Rodrigo Berriel, Thiago M Paixao, Filipe Mutz, et al. Self-driving cars: A survey. *Expert Systems with Applications*, 165:113816, 2021. Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. *arXiv preprint arXiv:1812.03079*, 2018. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars, 2016. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. CARLA. Carla autonomous driving leaderboard. https://leaderboard.carla.org/ 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Paylov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, and Andreas Geiger. Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.
LzPWWPAdY4
For the XSum and GSM8K tasks, LoftQ achieves better accuracy than full-precision LoRA. I wonder how the FP LoRA baseline was tuned. Perhaps 4-bit quantization provides implicit regularization, and FP LoRA was not regularized well enough? This would especially matter if the tasks are low-dimensional. In other words, if a high-capacity LLAMA-2-13b model is fine-tuned LoRA-style on GSM8K, how did the authors ensure that the model did not overfit?
LOFTQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models Yixiao Li1 * Yifan Yu1 * Chen Liang1 Pengcheng He2 Nikos Karampatziakis2 Weizhu Chen2 Tuo Zhao1 ABSTRACT Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning (Dettmers et al., 2023). In this work we focus on the scenario where quantization and LoRA fine-tuning are applied together on a pre-trained model. In such cases it is common to observe a consistent gap in the performance on downstream tasks between full fine-tuning and quantization plus LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization in downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed precision regimes. The code is available on https://github.com/yxli2123/LoftQ. 1 INTRODUCTION The advent of Pre-trained Language Models (PLMs) has marked a transformative shift in the field of Natural Language Processing (NLP), offering versatile solutions across various applications (He et al., 2021b; Lewis et al., 2019; Touvron et al., 2023). They have showcased unparalleled proficiency in executing a variety of language tasks, including Natural Language Understanding (NLU) and Natural Language Generation (NLG). These models typically have millions or even billions of parameters, necessitating substantial computational and memory requirements. However, the extensive computational and memory demands of these models pose significant challenges, especially for deployments where resources are often constrained and need to be shared among many users. To mitigate the extensive storage requirements of pre-trained models, quantization serves as a pivotal compression technique (Zafrir et al., 2019; Shen et al., 2020; Bai et al., 2022; Dettmers et al., 2022), converting high-precision numerical values into a discrete set of values. Typically, model parameters, originally stored in a 16-bit float format, are transformed into a 4-bit integer format through quantization, resulting in a substantial 75% reduction in storage overhead. Additionally, to facilitate the adaptation of quantized pre-trained models to downstream tasks efficiently, Low-Rank Adaptation (LoRA) is a viable approach (Hu et al., 2021). This technique is a parameter-efficient fine-tuning method traditionally applied to high-precision pre-trained models. It is based on the hypothesis that the differences between fully fine-tuned weights and pre-trained weights exhibit low-rank properties. This allows these differences to be represented using low-rank matrices. As a result, the original pre-trained weights remain unaltered, with adaptations confined solely to these low-rank matrices, enabling effective task adaptation. *Equal contribution 1Li, Yu, Liang and Zhao are affiliated with Georgia Institute of Technology. Correspondence to yixiaoli@gatech.edu, yyu429@gatech.edu and tuzhao@gatech.edu 2He, Karampatziakis and Chen are affiliated with Microsoft Azure. 
When quantizing pre-trained models, practitioners often concentrate primarily on the quantization technique, inadvertently neglecting the importance of subsequent LoRA fine-tuning (Dettmers et al., 2023; Diao et al., 2023). For example, QLoRA inherits the fixup initialization (Zhang et al., 2019) used in LoRA, which (Dettmers et al., 2023) attaches zero initialized low-rank adapters (see Section 2.3) to the quantized pre-trained model. The inevitable discrepancy introduced by quantization during the approximation of the original high-precision numbers, a scenario particularly pronounced in low-bit situations such as the 2-bit regime, can adversely impact the initialization of LoRA fine-tuning. As illustrated in Figure 1a, the quantized pre-trained model obtained by QLoRA exhibits severe degradation below the 3-bit level. This deviation in initialization often results in an inferior fine-tuning performance. As illustrated in Figure 1b, the fine-tuning performance drops as the quantization bit decreases when applying QLoRA. Moreover, it is noteworthy that QLoRA fails below the 3-bit level. In this paper, we introduce a novel quantization framework, called **LoRA-Fine-Tuning-aware Quantization** (LoftQ). It is designed specifically for pre-trained models that require quantization and LoRA fine-tuning. This framework actively integrates low-rank approximation, working in tandem with quantization to jointly approximate the original high-precision pre-trained weights. This synergy significantly enhances alignment with the original pre-trained weights as illustrated in Figure 2. Consequently, our method provides an advantageous initialization point for subsequent LoRA fine-tuning, leading to improvements in downstream tasks. ![Figure 1](image) (a) Pre-trained LLAMA-2-13b on WikiText-2 (b) Fine-tuned LLAMA-2-13b on WikiText-2 Figure 1: QLoRA performance with different bits. **Left:** QLoRA initialization of LLAMA-2-13b on WikiText-2. **Right:** Apply QLoRA to LLAMA-2-13b on WikiText-2 language modeling task. Smaller perplexity indicates better performance. We evaluate our quantization framework by conducting extensive experiments on downstream tasks, such as NLU, question answering, summarization, and NLG. Experiments show that LoftQ consistently outperforms QLoRA across all precision levels. For instance, with 4-bit quantization, we achieve a 1.1 and 0.8 gain in Rouge-1 for XSum (Narayan et al., 2018) and CNN/DailyMail (Hermann et al., 2015), respectively. LoftQ excels particularly in low-bit scenarios and works effectively with different quantization methods. For example, we achieve over an 8% gain on MNLI (Wang et al., 2019) and more than 10% on SQuADv1.1 (Rajpurkar et al., 2016) with both 2-bit NormalFloat and the 2-bit uniform quantization. We have not seen our approach performs worse than QLoRA. ## 2 BACKGROUND ### 2.1 Transformer Models A transformer model contains a sequence of layers, where each layer consists of two sub-layers: a multi-head self-attention (MHA) and a fully connected feed forward network (FFN) (Vaswani et al., 2017). 
Given the input $X \in \mathbb{R}^{n \times d}$, where $n$ is the sequence length and $d$ is the hidden dimension of the model, MHA computes the $h$ attention heads in parallel: $$\text{MHA}(X) = \text{Concat}(\text{head}_1, ..., \text{head}_h)W_o,$$ where $\text{head}_i = \text{Softmax}(XW_{qi}(XW_{ki})^\top / \sqrt{d_h})XW_{vi}$ for $i = 1, ..., h$, where $W_{qi}, W_{ki}, W_{vi} \in \mathbb{R}^{d \times d_h}$ are query, key, and value matrices, $W_o \in \mathbb{R}^{d \times d}$ is the output matrix, and $d_h = d/h$. FFN comprises two linear transformations and an activation function, and is defined as $\text{FFN}(X) = \sigma(XW_{f1} + b_1)W_{f2} + b_2$, where $W_{f1} \in \mathbb{R}^{d \times d_m}$, $W_{f2} \in \mathbb{R}^{d_m \times d}$, and $\sigma(\cdot)$ is the activation function. A residual connection is used and followed by layer normalization. Figure 2: Initialization discrepancy between the LoRA initialization and the original pre-trained weight matrix, described by the spectral norm and Frobenius norm of the difference. The weight matrix in the above figures is randomly selected in BART-large. The initialization is obtained by QLoRA and LoftQ, with Uniform and NormalFloat quantization methods applied at both 2-bit and 4-bit levels. LoftQ successfully mitigates the discrepancy, especially at the 2-bit level. 2.2 Quantization Quantization. Given a high-precision number, e.g., such as 32-bit floating point number, $X_{\text{HP}} \in \mathbb{R}$, $N$-bit quantization encodes it to an integer $X_{\text{INT}} \in \{0, 1, ..., 2^N - 1\}$. This process can be expressed as $$X_{\text{INT}} = \text{round}\left((2^N - 1)F(X_{\text{HP}})\right),$$ where $F(\cdot): \mathbb{R} \mapsto [0, 1]$ is a normalization function. Uniform quantization assumes $F(X) = (X - X_{\text{min}})/(X_{\text{max}} - X_{\text{min}})$. Dettmers et al. (2023) proposes 4-bit NormalFloat Quantization (NF4). It assumes $X \sim N(0, \sigma^2)$ and hence $F(X) = \Phi(X/\sigma)$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. Dequantization. A lookup table $T$, where $$T[i] = F^{-1}\left(\frac{i}{2^N - 1}\right), i = 0, 1, ..., 2^N - 1,$$ is used to decode the integer $X_{\text{INT}}$ to its simulated high-precision counterpart $X_D \in \mathbb{R}$. Therefore, the dequantization can be expressed as $$X_D = T[X_{\text{INT}}].$$ Simulated Quantization for Matrices. While it is possible to perform multiplication directly between quantized representations, it is common to apply simulated quantization for matrices (Bai et al., 2020; Shen et al., 2020). There, quantized weight matrices are stored as encoded integers in memory, and are temporarily dequantized to simulated high-precision matrices by the lookup table when engaged in multiplication operations. In simulated quantization, it is only necessary to analyze the map from a high-precision matrix to a simulated high-precision matrix. We denote this end-to-end process by $q_N(\cdot): \mathbb{R}^{m \times n} \mapsto \mathbb{R}_N^{m \times n}$, where $\mathbb{R}_N : \{T[i] \in \mathbb{R} | 0 \leq i < 2^N\}$. 2.3 Low-Rank Adaptation LoRA (Hu et al., 2021) updates two small weight matrices $A$ and $B$ that are attached to a frozen pre-trained weight matrix $W$. Hence, a linear transformation, $Y = XW$, is reformulated as $$Y = XW + XAB^\top,$$ where $X \in \mathbb{R}^{n \times d_1}, W \in \mathbb{R}^{d_1 \times d_2}, A \in \mathbb{R}^{d_1 \times r}, B \in \mathbb{R}^{d_2 \times r}$, and $r \ll \min\{d_1, d_2\}$. 
Initially, $$A \sim N(0, \sigma^2), \quad B = 0,$$ so as to align to the pre-trained weights. During the fine-tuning, $W$ is fixed while $A$ and $B$ are updated by some SGD-type optimization method. It is worth noting that if low-rank adapters $A$ and $B$ are attached to a quantized backbone $Q = q_N(W)$ and are initialized by (5), the starting weight $Q + AB^\top$ is no longer equal to the pre-trained weight $W$ due to the discrepancy introduced by the quantization. 3 METHOD We propose LoRA-Fine-Tuning-aware Quantization (LoftQ), a quantization framework for LLMs. It alternatively applies quantization and low-rank approximation to approximate original pre-trained weights. This quantization framework provides a promising initialization for LoRA fine-tuning, which alleviates the quantization discrepancy in QLoRA and improves generalization in downstream tasks significantly. 3.1 LORA-AWARE QUANTIZATION We use an $N$-bit quantized weight $Q \in \mathbb{R}_N^{d_1 \times d_2}$ and low-rank approximations $A \in \mathbb{R}^{d_1 \times r}, B \in \mathbb{R}^{d_2 \times r}$ to approximate the original high-precision pre-trained weight $W \in \mathbb{R}^{d_1 \times d_2}$ as the initialization of LoRA fine-tuning. Specifically, before fine-tuning, we initialize the network by minimizing the following objective: $$\min_{Q,A,B} \| W - Q - AB^\top \|_F,$$ where $\| \cdot \|_F$ denotes the Frobenious norm. This objective in (6) takes LoRA fine-tuning into consideration by jointly optimizing the initial values of the quantized backbone $Q$ and low-rank adapters $A, B$. Contrarily, practitioners typically convert the pre-trained weight $W$ into a quantized weight $Q$ outright, neglecting the subsequent LoRA fine-tuning process. This oversight leads to notable performance degradation in downstream tasks arising from the quantization discrepancy. 3.2 ALTERNATING OPTIMIZATION We solve the minimization problem in (6) by alternating between quantization and singular value decomposition (SVD). To begin with, we set $A_0$, and $B_0$ equal to 0. Quantization. At the $t$-th step, we quantize the difference between the original pre-trained weight matrix $W$ and the low-rank approximation $A_{t-1}B_{t-1}^\top$ from the previous step to obtain the quantized weight matrix $Q_t$ by $$Q_t = q_N(W - A_{t-1}B_{t-1}^\top),$$ where $q_N(\cdot)$ maps a high-precision weight matrix to a quantized matrix. We remark that our algorithm is compatible with different quantization functions $q_N(\cdot)$. We apply NF4 and the uniform quantization in Section 4 as examples. We also remark that $Q_t$ is not an exact solution of the minimization in (6), given the fixed $A_{t-1}B_{t-1}^\top$, but it is an efficient approximation. SVD. After obtaining the $t$-th quantized weight $Q_t$, SVD is applied to the residual of the quantization denoted by $R_t = W - Q_t$ by $$R_t = \sum_{i=1}^d \sigma_{t,i} u_{t,i} v_{t,i}^\top,$$ where $d = \min\{d_1, d_2\}$, $\sigma_{t,1} \geq \sigma_{t,2} \geq \ldots \geq \sigma_{t,d}$ are the singular values of $R_t$, $u_{t,i}$’s and $v_{t,i}$’s are the associated left and right singular vectors of $R_t$. We then obtain a rank-$r$ approximation of $R_t$ by $A_tB_t^\top$, where $$A_t = [\sqrt{\sigma_{t,1}} u_{t,1}, \ldots, \sqrt{\sigma_{t,r}} u_{t,r}],$$ $$B_t = [\sqrt{\sigma_{t,1}} v_{t,1}, \ldots, \sqrt{\sigma_{t,r}} v_{t,r}].$$ We summarize our method in Algorithm 1. 
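A minimal PyTorch-style sketch of the alternating procedure in (7)–(9) is given below; it is an illustration of Algorithm 1 rather than the reference implementation. `quantize_fn` stands for any simulated quantization $q_N(\cdot)$, e.g., NF4 or uniform quantization.

```python
import torch

@torch.no_grad()
def loftq_init(W: torch.Tensor, quantize_fn, rank: int, steps: int = 1):
    """Alternating quantization and rank-r SVD of the residual (Algorithm 1)."""
    A = torch.zeros(W.shape[0], rank, dtype=W.dtype, device=W.device)
    B = torch.zeros(W.shape[1], rank, dtype=W.dtype, device=W.device)
    Q = quantize_fn(W)                       # value returned if steps == 0 (plain QLoRA init)
    for _ in range(steps):
        Q = quantize_fn(W - A @ B.T)         # eq. (7): quantize the current residual
        R = W - Q                            # quantization residual
        U, S, Vh = torch.linalg.svd(R, full_matrices=False)
        root = S[:rank].sqrt()
        A = U[:, :rank] * root               # eq. (9): rank-r factors of R
        B = Vh[:rank, :].T * root
    return Q, A, B                           # backbone init Q_T and adapter init A_T, B_T
```

Here `steps` plays the role of $T$ in Algorithm 1.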
It is worth noting that $T = 1$ is a special case where $Q_1$ is the exact quantized weight obtained by QLoRA, and low-rank approximations $A_1, B_1$ are obtained by the SVD of the quantization residual $W - Q_1$. $T = 1$ is sufficient to mitigate the quantization discrepancy, and alternating optimization helps to find a closer initialization to the pre-trained weight $W$, which further improves the performance (see Section 5). We remark that the computational cost of LoftQ is negligible because it is applied to individual weight matrices and can be executed in parallel. We also remark one can apply LoftQ only once to a pre-trained model and reuse the initialization obtained by LoftQ for different downstream tasks. 3.3 APPLYING TO LORA FINE-TUNING We store the $Q_T \in \mathbb{R}_N^{d_1 \times d_2}$ obtained by LoftQ using an integer matrix $M$ by (1) and a lookup table $T$ by (2). We initialize the backbone with the integer matrix $M$ and initialize the low-rank adapters with $A_T, B_T$ obtained by LoftQ. Algorithm 1 LoftQ input Pre-trained weight $W$, target rank $r$, $N$-bit quantization function $q_N(\cdot)$, alternating step $T$ 1: Initialize $A_0 \leftarrow 0$, $B_0 \leftarrow 0$ 2: for $t = 1$ to $T$ do 3: Obtain quantized weight $Q_t \leftarrow q_N(W - A_{t-1}B_{t-1}^\top)$ 4: Obtain low-rank approximation $A_t, B_t \leftarrow \text{SVD}(W - Q_t)$ by (9) 5: end for output $Q_T, A_T, B_T$ During LoRA fine-tuning, we freeze the integer weight $M$ and optimize the low-rank adapters with an efficient optimization algorithm, e.g., AdamW (Loshchilov & Hutter, 2017). In forward propagation, the integer weight $M$ is temporarily dequantized to the simulated high-precision weight $Q_T$ by its lookup table, as described in (5). In back propagation, gradients and optimizer state are only related to low-rank adapters $A, B$, which reduces considerable training cost. 4 EXPERIMENTS We evaluate our method on NLU and NLG tasks. We apply LoftQ for quantizing DeBERTaV3-base (He et al., 2021b), BART-large (Lewis et al., 2019), and LLAMA-2 series (Touvron et al., 2023). Implementation Details. Following the prior works of LoRA variants (Zhang et al., 2023; He et al., 2021a), we freeze all the backbone weight matrices and add low-rank adapters to weight matrices in MHA and FFN of all layers. We quantize the weight matrices that are attached by low-rank adapters. All the quantized models and adapters used in this paper are available on https://huggingface.co/LoftQ. Our implementation is based on publicly available Huggingface Transformers code-base (Paszke et al., 2019). All the experiments are conducted on NVIDIA A100 GPUs. Quantization Methods. We apply two quantization methods to demonstrate LoftQ is compatible with different quantization functions: - Uniform quantization is a classic quantization method. It uniformly divides a continuous interval into $2^N$ categories and stores a local maximum absolute value for dequantization. - NF4 and its 2-bit variant NF2 are quantization methods used in QLoRA (Dettmers et al., 2023). They assume that the high-precision values are drawn from a Gaussian distribution and map these values to discrete slots that have equal probability. We perform 2-bit and 4-bit quantization on all models, achieving compression ratios of 25-30% and 15-20% at the 4-bit and 2-bit levels, respectively. The compression ratios and trainable parameter ratios for all models are detailed in the Appendix A. Baselines. 
We compare LoftQ with the following baseline methods: - Full fine-tuning is the most common approach for adapting a pre-trained model to downstream tasks. The model is initialized with pre-trained weights and all parameters are updated through an SGD-type optimization method. - Full precision LoRA (LoRA) is a lightweight method for task adaptation, where it stores the backbone using 16-bit numbers and optimizes the low-rank adaptors only. The adaptors are applied to the same matrices as in LoftQ. - QLoRA is similar to LoRA except the backbone is quantized into low-bit regime. The low-rank adapters are initialized using (5) and are applied to the same matrices as in LoftQ. 4.1 ENCODER-ONLY MODEL: DEBERTAV3 Models and Datasets. We quantize the DeBERTaV3-base (He et al., 2021b) with LoftQ, then fine-tune and evaluate the model on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), SQuADv1.1 (Rajpurkar et al., 2016), and ANLI (Nie et al., 2019). The specific tasks of GLUE are given in Appendix C. Following previous works (Zhang et al., 2023), we exclude WNLI in the experiments. Implementation Details. We select the learning rates from \(\{1 \times 10^{-5}, 5 \times 10^{-5}, 1 \times 10^{-4}, 5 \times 10^{-4}\}\). We quantize the entire backbone. Given that GLUE, SQuADv1.1, and ANLI are relatively easy NLU tasks, we also quantize the embedding layer for higher compression efficiency. We apply the NormalFloat and the uniform quantization for LoftQ and QLoRA at both 2-bit and 4-bit levels. We use rank 16 and 32 for low-rank adapters. More implementation details, such as the training epochs and batch sizes, are presented in Appendix D.2. Main Results. Table 1 and Table 2 summarize the results for 2-bit quantization on the GLUE, SQuADv1.1, and ANLI datasets, by NF2 and the uniform quantization, respectively. Our method consistently outperforms QLoRA on all settings with respect to different ranks, quantization methods, and datasets. When using the uniform quantization (Table 2), our method achieves 88.0% accuracy on MNLI-m, surpassing the QLoRA baseline by 8%. For tasks like SST and SQuADv1.1, our method even approaches the full fine-tuning performance at 2-bit level. The 4-bit quantization experiment results are presented in Appendix D.1 as both LoftQ and QLoRA achieve performance close to full fine-tuning. Table 1: Results with 2-bit LoftQ of DeBERTaV3-base models on GLUE development set, SQuADv1.1 development set, ANLI test set using NF2 quantization. We report the median over four seeds. N.A. indicates the model does not converge. The best results on each dataset are shown in bold. | Rank | Method | MNLI m/mm | QNLI Acc | RTE Acc | SST Acc | MRPC Acc | CoLA Matt | QQP Acc | STSB P/S Corr | SQuAD EM/F1 | ANLI Acc | |------|--------|-----------|----------|---------|---------|----------|-----------|---------|--------------|-------------|---------| | - | Full FT| 90.5/90.6 | 94.0 | 82.0 | 95.3 | 89.5/93.3| 69.2 | 92.4/89.8| 91.6/91.1 | 88.5/92.8 | 59.8 | | 16 | LoRA | 90.4/90.5 | 94.6 | 85.1 | 95.1 | 89.9/93.6| 69.9 | 92.0/89.4| 91.7/91.1 | 87.3/93.1 | 60.2 | | 16 | QLoRA | 75.4/75.6 | 82.4 | 55.9 | 86.5 | 73.8/82.8| N.A. | 86.8/82.3| 83.0/82.8 | 61.5/71.2 | N.A. | | | LoftQ | 84.7/85.1 | 86.6 | 61.4 | 90.2 | 83.8/88.6| 37.4 | 90.3/86.9| 87.1/86.9 | 81.5/88.6 | 47.1 | | 32 | QLoRA | 78.5/78.7 | 80.4 | 56.7 | 86.9 | 73.8/82.7| N.A. | 87.1/82.7| 83.6/83.3 | 64.6/73.8 | N.A. 
| | | LoftQ | 86.0/86.1 | 89.9 | 61.7 | 92.0 | 83.6/87.2| 47.5 | 91.0/87.9| 87.5/87.0 | 82.9/89.8 | 49.0 | Table 2: Results with 2-bit LoftQ of DeBERTaV3-base models on GLUE development set, SQuADv1.1 development set using Uniform quantization. We report the median over four seeds. N.A. indicates the model does not converge. The best results on each task are shown in bold. | Rank | Method | MNLI m/mm | QNLI Acc | RTE Acc | SST Acc | MRPC Acc | CoLA Matt | QQP Acc | STSB P/S Corr | SQuAD EM/F1 | |------|--------|-----------|----------|---------|---------|----------|-----------|---------|--------------|-------------| | - | Full FT| 90.5/90.6 | 94.0 | 82.0 | 95.3 | 89.5/93.3| 69.2 | 92.4/89.8| 91.6/91.1 | 88.5/92.8 | | 16 | LoRA | 90.4/90.5 | 94.6 | 85.1 | 95.1 | 89.9/93.6| 69.9 | 92.0/89.4| 91.7/91.1 | 87.3/93.1 | | 16 | QLoRA | 76.5/76.3 | 83.8 | 56.7 | 86.6 | 75.7/84.7| N.A. | 87.1/82.6| 83.5/83.4 | 69.5/77.6 | | | LoftQ | 87.3/87.1 | 90.6 | 61.1 | 94.0 | 87.0/90.6| 59.1 | 90.9/88.0| 87.9/87.6 | 84.4/91.2 | | 32 | QLoRA | 79.9/79.5 | 83.7 | 57.8 | 86.9 | 76.5/84.5| N.A. | 88.6/84.7| 84.1/84.0 | 71.6/80.2 | | | LoftQ | 88.0/88.1 | 92.2 | 63.2 | 94.7 | 87.5/91.2| 60.5 | 91.3/88.3| 89.5/89.2 | 85.2/91.6 | Our method is also more stable compared to QLoRA in the low-bit regime. For instance, while QLoRA fails to converge on CoLA for both quantization methods and ranks, LoftQ converges in all cases and achieves a score of 60.5 using uniform quantization at rank 32. LoftQ stands out in its ability to consistently attain robust and improved performance by effectively preserving the starting point of pre-trained weights. 4.2 Encoder-Decoder Model: BART Models and Datasets. We quantize BART-large model (Lewis et al., 2020) with LoftQ, then fine-tune and evaluate the model on two commonly used summarization datasets: XSum (Narayan et al., 2018) and CNN/DailyMail (Hermann et al., 2015). Implementation Details. We apply LoftQ to weight matrices in MHA and FFN of both encoder and decoder layers. We report ROUGE 1/2/L scores, which are the metrics for summarization tasks (Lin, 2004). We conduct quantization experiments in both 2-bit and 4-bit scenarios. We experiment with both NormalFloat and the uniform quantization in both 2-bit and 4-bit scenarios. In each precision, we choose rank equal to 8 and 16 for a fair comparison with the full precision LoRA baseline (Zhang et al., 2023). Please see Appendix E for detailed configurations. Main Results. Table 3 summarizes our 4-bit quantization experiment results on the XSum and CNN/DailyMail test sets. Our method consistently outperforms QLoRA at both ranks on both datasets. It even surpasses full precision LoRA at both ranks on Xsum. We will discuss this unexpected results in Section 5. The 2-bit quantization results are shown in Table 4. Our observation is consistent with the NLU experiments, that LoftQ demonstrates the convergence to reasonable results, while QLoRA does not converge. This indicates our method is robuster by narrowing the initialization gap. Table 3: Results with 4-bit LoftQ of BART-large on XSum and CNN/DailyMail. We report ROUGE-1/2/L. Lead-3 means choosing the first 3 sentences as the summary. N.A. indicates the model does not converge. Full FT: full fine-tuning. We report the median over five seeds. 
| Quantization | Rank | Method | XSum | CNN/DailyMail | |--------------|------|--------|------|---------------| | | - | Lead-3 | 16.30/1.60/11.95 | 40.42/17.62/36.67 | | | | Full FT | 45.14/22.27/37.25 | 44.16/21.28/40.90 | | Full Precision | 8 | LoRA | 43.40/20.20/35.20 | 44.72/21.58/41.84 | | | 16 | LoRA | 43.95/20.72/35.68 | 45.03/21.84/42.15 | | | 8 | QLoRA | 42.91/19.72/34.82 | 43.10/20.22/40.06 | | | | LoftQ | **44.08/20.72/35.89** | **43.81/20.95/40.84** | | NF4 | 16 | QLoRA | 43.29/20.05/35.15 | 43.42/20.62/40.44 | | | | LoftQ | **44.51/21.14/36.18** | **43.96/21.06/40.96** | | Uniform | 8 | QLoRA | 41.84/18.71/33.74 | N.A. | | | | LoftQ | **43.86/20.51/35.69** | **43.73/20.91/40.77** | | | 16 | QLoRA | 42.45/19.36/34.38 | 43.00/20.19/40.02 | | | | LoftQ | **44.29/20.90/36.00** | **43.87/20.99/40.92** | Table 4: Results with 2-bit LoftQ of BART-large on XSum and CNN/DailyMail using NF2 quantization. N.A. indicates the model does not converge. We report ROUGE-1/2/L, the higher the better. We report the median over five seeds. | Rank | Method | XSum | CNN/DailyMail | |------|--------|------|---------------| | 8 | QLoRA | N.A. | N.A. | | | LoftQ | 39.63/16.65/31.62 | 42.24/19.44/29.04 | | 16 | QLoRA | N.A. | N.A. | | | LoftQ | 40.81/17.85/32.80 | 42.52/19.81/39.51 | 4.3 Decoder-only Model: LLAMA-2 Models and Datasets. We quantize LLAMA-2-7b and LLAMA-2-13b (Touvron et al., 2023) with LoftQ. We then fine-tune and evaluate the models on two NLG datasets: GSM8K (Cobbe et al., 2021) and WikiText-2 (Merity et al., 2016). Please see Appendix F for more details about the datasets. Implementation Details. Similarly, we apply LoftQ to weight matrices in MHA and FFN of all layers. In WikiText-2 evaluation, we report perplexity. In case of overfitting, we apply weight decay to low-rank adapters for all settings. In GSM8K evaluation, we extract numerical answers in the generated solutions and then calculate the accuracy using those numerical answers. We conduct experiments with both NF2 and NF4. Please see Appendix F for detailed configurations. Main Results. Table 5 presents a summary of our experiments on LLAMA-2-7b and LLAMA-2-13b using 2-bit, 4-bit, and mixed-precision NormalFloat quantization methods on WikiText-2 and GSM8K datasets. In WikiText-2, our method consistently outperforms QLoRA across all quantization precision settings on both models. When dealing with the challenging 2-bit precision, where QLoRA fails to converge, LoftQ manages to achieve a perplexity of 7.85. In GSM8K, our method achieves better or on par performance compared to QLoRA across different model sizes and quantization precision levels. For example, our method achieves 26.5% accuracy using 2-bit precision of LLAMA-2-7b, where QLoRA does not converge. To provide a customized trade-off between the performance and precision, we also explore mixed-precision (equivalent to 3 bits) quantization where matrices in the first half layers are quantized using 4 bits, and the rest matrices remain 2 bits. We witness a remarkable 4.1% accuracy boost on the GSM8K dataset using LLAMA-2-7b and a 4.7% boost using LLAMA-2-13b. This result underscores the potential of LoftQ for complex mixed-precision quantization scenarios. Table 5: Results of LoftQ using NormalFloat for LLAMA-2 series on WikiText-2 and GSM8K. 3/2.5/2.25-bit indicates mixed-precision quantization: 4-bit precision for the first 16/8/4 layers and 2-bit precision for the rest of layers. 
We report the perplexity (the smaller the better) for WikiText-2 and accuracy for GSM8K. The rank of low-rank adapters is 64. N.A. indicates the model does not converge. We report the median over five random seeds. | Method | Bit | LLAMA-2-7b WikiText-2↓ | GSM8K↑ | LLAMA-2-13b WikiText-2↓ | GSM8K↑ | |--------|-----|-----------------------|--------|------------------------|--------| | LoRA | 16 | 5.08 | 38.5 | 5.12 | 48.8 | | QLoRA | 4 | 5.70 | **38.2** | 5.22 | 48.8 | | LoftQ | 4 | **5.24** | 38.0 | **5.16** | **49.1** | | QLoRA | 3 | 5.73 | 32.1 | 5.22 | 40.7 | | LoftQ | 3 | **5.63** | **36.2** | **5.13** | **45.4** | | QLoRA | 2.5 | N.A. | N.A. | 19.39 | N.A. | | LoftQ | 2.5 | 5.78 | **31.1** | 5.22 | **41.1** | | QLoRA | 2.25| N.A. | N.A. | N.A. | N.A. | | LoftQ | 2.25| **6.13** | **27.5** | **5.45** | **38.1** | | QLoRA | 2 | N.A. | N.A. | N.A. | N.A. | | LoftQ | 2 | **7.85** | **26.5** | **7.69** | **33.4** | 4.4 ANALYSIS Effectiveness of Alternating Optimization. We conduct experiments with different alternating step $T$ to verify the effectiveness of the alternating optimization and to find the best value $T$ as a hyperparameter for different models. Across all tasks and models, we observed that alternating optimization yields substantial improvements even with a minimal alternating step. This suggests that it rapidly narrows the discrepancy between quantized weights and pre-trained weights, making our method easy to apply. For example, LoftQ achieves 21.14 Rouge-2 score on XSum using only 1 step. Interestingly, we noticed that increasing the alternating step beyond a certain point tends to result in diminishing returns. We suspect this phenomenon occurs because, as the gap becomes smaller, it becomes more challenging for alternating optimization to consistently minimize the gap at each step. This challenge emerges because of the inherent errors introduced by the quantization method. Nevertheless, results from Figure 3 indicate our method is not sensitive to the alternating step $T$ and is able to consistently enhance downstream fine-tuning performance. Figure 3: Comparison of different alternating step $T$ used in LoftQ. $T = 0$ indicates we use QLoRA method that initializes low-rank adapters by (5). $T = 1, 5, 10$ indicates we use different $T$ for LoftQ described in Algorithm 1. Left: Uniform 2-bit DeBERTaV3-base. Middle: NF2 2-bit LLAMA-2-13b. Right: NF4 BART-large. 5 DISCUSSION Start with quantization or SVD in the alternating optimization? An alternative algorithm to the alternating optimization is that we first obtain the low-rank approximation $A_t$, $B_t$ and then obtain the quantized weight $Q_t$ by switching Line 3 and Line 4 in Algorithm 1. We note this is a valid alternative method as both still jointly minimize the objective in (6). Table 6 summarizes the performance of this alternative method. It is noteworthy that the alternative method still outperforms QLoRA significantly, even though it is worse than the primary version. This observation underscores the potential for performance improvement by achieving a closer approximation of pre-trained weights within the low-precision regime. LoftQ better than Full-precision LoRA? We find LoftQ outperforms full precision LoRA in XSum and GSM8K (see Table 3 and Table 5). Beside the overfitting caused by lack of regularization, another possible explanation for this unexpected phenomenon is that the initial low-rank adapters obtained by LoftQ are non-zero while they are all zero in full precision LoRA as described in (5). 
Such zero initialization could make the fine-tuning unstable, and therefore it performs worse than LoftQ. We leave the study of the robustness of LoftQ as future work. Table 6: Results of 2-bit uniformly quantized DeBERTaV3-base on part of GLUE. LoftQ(SVD First) indicates the alternative LoftQ that switches Line 3 and Line 4 in Algorithm 1. We report the median over four random seeds. The best results on each task are shown in bold. | Method | Rank | MNLI m / mm | QNLI Acc | SST2 Acc | |----------------------|------|-------------|----------|----------| | Full FT | - | 90.5/90.6 | 94.0 | 95.3 | | QLoRA | 32 | 79.9/79.5 | 83.8 | 86.6 | | LoftQ(SVD First) | 32 | 87.8/87.7 | 84.9 | 89.7 | | LoftQ(Quantization First) | 32 | 88.0/88.1 | 92.2 | 94.7 | 6 RELATED WORK Quantization-Aware Training (QAT) is often used to obtain quantized models that are adapted in downstream tasks [Peri et al., 2020] [Liu et al., 2023]. It involves quantization and full model fine-tuning at the same time. However, QAT requires massive training cost, such as the gradient and optimization state. Moreover, it is difficult to compute the gradient of quantized weights. Our method, with the help of LoRA, sidesteps the aforementioned issues, providing a light approach for downstream task adaptation. Post-Training Quantization (PTQ) is a category of popular quantization frameworks [Frantar et al., 2022] [Xiao et al., 2023], which can also be used for task adaptation. It calibrates the high-precision model with a small subset of the training dataset. Therefore, the subsequent quantization is guided by the training dataset, providing task-specific quantized models. Besides, it does not involve any gradient backpropagation, so it is cost-efficient. However, it usually results in lower accuracy compared to QAT. 7 CONCLUSION We propose LoftQ, a quantization framework for LLMs, which alternatively applies quantization and low-rank approximation to the original high-precision pre-trained weights, to obtain an initialization for the subsequent LoRA fine-tuning. Experiments on natural language understanding, question answering, summarization, and natural language generation show that our framework remarkably surpasses existing methods, e.g., QLoRA, for quantizing encoder-only, encoder-decoder, and decoder-only models. We have not observed our method exhibiting worse performance over QLoRA. Moreover, our quantization framework demonstrates effectiveness and robustness particularly in low-bit quantization regimes, e.g., the 2-bit level. REFERENCES Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. Binarybert: Pushing the limit of bert quantization. *arXiv preprint arXiv:2012.15701*, 2020. Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, and Michael R Lyu. Towards efficient post-training quantization of pre-trained language models. *Advances in Neural Information Processing Systems*, 35:1405–1418, 2022. Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. 2006. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. In *TAC*, 2009. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)*, pp. 
1–14, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworko, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*, 2021. Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, 2007. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. *arXiv preprint arXiv:2208.07339*, 2022. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. *arXiv preprint arXiv:2305.14314*, 2023. Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, and Tong Zhang. Lmflow: An extensible toolkit for finetuning and inference of large foundation models. *arXiv preprint arXiv:2306.12420*, 2023. William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop on Paraphrasing (IWP2005)*, 2005. Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. *arXiv preprint arXiv:2210.17323*, 2022. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In *Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing*, pp. 1–9, Prague, June 2007. Association for Computational Linguistics. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. *arXiv preprint arXiv:2110.04366*, 2021a. Pengcheng He, Jianfeng Gao, and Weizhu Chen. Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing. *arXiv preprint arXiv:2111.09543*, 2021b. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. *Advances in neural information processing systems*, 28, 2015. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In *Thirteenth international conference on the principles of knowledge representation and reasoning*, 2012.
1VcKvdYbUM
I also think the definition of AP in the abstract is confusing: it says that AP is “a method of poisoning data by injecting imperceptible perturbations to prevent its use in model training”; my question: why would one even add such perturbations to the data?
ABSTRACT The efficacy of availability poisoning, a method of poisoning data by injecting imperceptible perturbations to prevent its use in model training, has been a hot subject of investigation. Previous research suggested that it was difficult to effectively counteract such poisoning attacks. However, the introduction of various defense methods has challenged this notion. Due to the rapid progress in this field, the performance of different novel methods cannot be accurately validated due to variations in experimental setups. To further evaluate the attack and defense capabilities of these poisoning methods, we have developed a benchmark — APBench for assessing the efficacy of adversarial poisoning. APBench consists of 9 state-of-the-art availability poisoning attacks, 9 defense algorithms, and 4 conventional data augmentation techniques. We also have set up experiments with varying different poisoning ratios, and evaluated the attacks on multiple datasets and their transferability across model architectures. We further conducted a comprehensive evaluation of 2 additional attacks specifically targeting unsupervised models. Our results reveal the glaring inadequacy of existing attacks in safeguarding individual privacy. APBench is open source and available to the deep learning community. 1 INTRODUCTION Recent advancements of deep neural networks (DNNs) [21, 37, 15] heavily rely on the abundant availability of data resources [5, 33, 19]. However, the unauthorized collection of large-scale data through web scraping for model training has raised concerns regarding data security and privacy. In response to these concerns, a new paradigm of practical and effective data protection methods has emerged, known as availability poisoning attacks (APA) [40, 45, 9, 17, 43, 10, 32, 14, 36, 8, 44, 14, 32], or unlearnable example attacks. These poisoning methods inject small perturbations into images that are typically imperceptible to humans, in order to hinder the model’s ability to learn the original features of the images. Recently, the field of deep learning has witnessed advancements in defense strategies [23, 30, 7, 17] that hold the potential to challenge APAs, thereby undermining their claimed effectiveness and robustness. These defenses reveal the glaring inadequacy of existing APAs in safeguarding individual privacy in images. Consequently, we anticipate an impending arms race between attack and defense strategies in the near future. However, evaluating the performance of these new methods across diverse model architectures and datasets poses a significant challenge due to variations in experimental settings of recent literatures. In addition, researchers face the daunting task of staying abreast of the latest methods and assessing the effectiveness of various competing attack-defense combinations. This could greatly hamper the development and empirical exploration of novel attack and defense strategies. To tackle this challenge, we propose the APBench, a benchmark specifically designed for availability poisoning attacks and defenses. It involves implementing poisoning attack and defense mechanisms under standardized perturbations and training hyperparameters, in order to ensure fair and reproducible comparative evaluations. APBench comprises a range of availability poisoning attacks and defense algorithms, and commonly-used data augmentation policies. This comprehensive suite allows us to evaluate the effectiveness of the poisoning attacks thoroughly. 
Our contributions can be summarized as follows: 1Link to follow • An open source benchmark for state-of-the-art availability poisoning attacks and defenses, including 9 supervised and 2 unsupervised poisoning attack methods, 9 defense strategies and 4 common data augmentation methods. • We conduct a comprehensive evaluation competing pairs of poisoning attacks and defenses. • We conducted experiments across 4 publicly available datasets, and also extensively examined scenarios of partial poisoning, increased perturbations, the transferability of attacks to 4 CNN and 2 ViT models under various defenses, and unsupervised learning. We provide visual evaluation tools such as t-SNE, Shapley value map and Grad-CAM to qualitatively analyze the impact of poisoning attacks. The aim of APBench is to serve as a catalyst for facilitating and promoting future advancements in both availability poisoning attack and defense methods. By providing a platform for evaluation and comparison, we aspire to pave the way for the development of future availability poisoning attacks that can effectively preserve utility and protect privacy. 2 RELATED WORK 2.1 AVAILABILITY POISONING ATTACKS Availability poisoning attacks (APAs) belong to a category of data poisoning attacks [12] that adds a small perturbation to images, that is often imperceptible to humans. However, the objective contrasts with that of traditional data poisoning. The purpose of these perturbations is to protect individual privacy from deep learning algorithms, preventing DNNs from effectively learning the features present in the images. The attacker’s goal is to thus render their data unlearnable with perturbations, hindering the unauthorized trainer from utilizing the data to learn models that can generalize effectively to the original data distribution. The intent of APAs is therefore benign rather than malicious as generally assumed of data poisoning attacks. We typically assume that the attacker publishes (a subset of) the images, which get curated and accurately labeled by the defender to train on them without consent from the attacker. Formally, consider a source dataset comprising original examples \( D_{\text{clean}} = \{(x_1, y_1), \ldots, (x_n, y_n)\} \) where \( x_i \in X \) denotes an input image and \( y_i \in Y \) represents its label. The objective of the attacker is thus to construct a set of availability perturbations \( \delta \), such that models trained on the set of availability poisoned examples \( D_{\text{poi}}(\delta) = \{(x + \delta_x, y) \mid (x, y) \in D_{\text{clean}}\} \) are expected to perform poorly when evaluated on a test set \( D_{\text{test}} \) sampled from the distribution \( S \): \[ \max_{\delta} \mathbb{E}_{(x_i, y_i) \sim D_{\text{test}}} [\mathcal{L}(f_{\theta^*}(\delta)(x_i), y_i)], \quad \text{s.t.} \quad \theta^*(\delta) = \arg\min_{\theta} \mathbb{E}_{(\tilde{x}_i, \tilde{y}_i) \sim D_{\text{poi}}(\delta)} \mathcal{L}(f_{\theta}(\tilde{x}_i), \tilde{y}_i), \] where \( \mathcal{L} \) denotes the loss function, usually the softmax cross-entropy loss. In order to limit the impact on the original utility of images, the perturbation \( \delta_i \) is generally constrained within a small \( \epsilon \)-ball of \( \ell_p \) distance. To enforce a small perturbation budget, recent methods typically constrain their perturbations within a small \( \ell_p \)-ball of \( \epsilon \) radius, where typically \( p \in \{0, 2, \infty\} \). 
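Before turning to individual methods, the attacker objective in (1) can be made concrete. The sketch below shows one inner step of an error-minimizing attack under an $\ell_\infty$ budget of $\epsilon$, as used by several of the methods discussed next; the surrogate model, step size, and the loop structure around this step are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def error_minimizing_step(surrogate, x, y, delta, epsilon=8/255, alpha=2/255):
    """One PGD-style update that *minimizes* the surrogate's training loss w.r.t. delta,
    planting shortcuts that make (x + delta, y) trivially learnable.
    Assumes images x are batched and scaled to [0, 1]."""
    delta = delta.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Descend the loss (error-minimizing), the opposite of an adversarial example.
    delta = delta - alpha * grad.sign()
    # Project back into the l_inf ball of radius epsilon and keep pixels in [0, 1].
    delta = delta.clamp(-epsilon, epsilon)
    delta = (x + delta).clamp(0.0, 1.0) - x
    return delta.detach()
```

In a full attack, this step typically alternates with updates of the surrogate model itself, and class-wise variants share a single $\delta$ per class.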
DeepConfuse (DC) [8] proposes to use autoencoders to generate training-phase adversarial perturbations. Neural tangent generalization attacks (NTGA) [45] approximates the target model as a Gaussian process [18] using the generalized neural tangent kernel, and solves a bi-level optimization for perturbations. Error-minimizing attacks (EM) [17] minimizes the training error of the perturbed images relative to their original labels on the target model, creating shortcuts for the data to become “unlearnable” by the target model. Building upon EM, robust error-minimizing attacks (REM) [10] use adversarially trained models to generate perturbations in order to counter defense with adversarial training. Hypocritical [40] also generates error-minimizing perturbations similar to EM, but instead uses a pretrained surrogate model. Targeted adversarial poisoning (TAP) [9], inspired by [26], found adversarial examples could be used for availability poisoning. In contrast to the above approaches, indiscriminate poisoning (UCL) [14] and transferable unlearnable examples (TUE) [32] instead consider availability poisoning for unsupervised learning. On the other hand, \( \ell_2 \) and \( \ell_0 \) perturbation-based poisoning methods do not require a surrogate model. They achieve poisoning by searching for certain triggering patterns to create shortcuts in the network. Besides the above \( \ell_\infty \)-bounded methods, Linear-separable poisoning (LSP) [44] and Autoregressive Poisoning (AR) [36] both prescribe perturbations within an $\ell_2$ perturbation budget. Specifically, LSP generates randomly initialized linearly separable color block perturbations, while AR fills the starting rows and columns of each channel with Gaussian noise and uses an autoregressive process to fill the remaining pixels, generating random noise perturbations. One Pixel Shortcut [43] (OPS), as an $\ell_0$-bounded poisoning method, perturbs only a single pixel in the training image to achieve strong poisoning in terms of usability. Figure 1 provides visual examples of these attacks. ![Visualizations of unlearnable CIFAR-10 images with corresponding perturbations. Perturbations are normalized for visualization.](image) **Figure 1:** Visualizations of unlearnable CIFAR-10 images with corresponding perturbations. Perturbations are normalized for visualization. ### 2.2 Availability Poisoning Defenses The goal of the defender is to successfully train a model with good generalization abilities (e.g., test accuracies on natural unseen images) on protected data. Generally, the defender can control the training algorithm, and only have access to a training data set with data poisoned either partially or fully. The objective of the defender is thus to find a novel training algorithm $g(D_{\text{poi}})$ that trains models to generalize well to the original data distribution: $$\min_g \mathbb{E}_{(x_i, y_i) \sim D_{\text{test}}} [\mathcal{L}(f_{\theta^*}(x_i), y_i)], \quad \text{s.t. } \theta^* = g(D_{\text{poi}}).$$ (2) Notably, if the method employs the standard training loss but performs novel image transformations $h$, then $g$ can be further specialized as follows: $$g(D_{\text{poi}}) = \arg\min_\theta \mathbb{E}_{(\hat{x}_i, y_i) \sim D_{\text{poi}}(\delta)} \mathcal{L}(f_\theta(h(\hat{x}_i)), y_i).$$ (3) Currently, defense methods against perturbative availability poisoning can be mainly classified into two categories: preprocessing and training-phase defenses. 
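One simple instance of the preprocessing form of $g$ in (3) is lossy JPEG re-encoding, one of the image-compression countermeasures of the ISS defense discussed next. A minimal sketch follows; the quality factor is an illustrative choice.

```python
import io
from PIL import Image

def jpeg_transform(image: Image.Image, quality: int = 10) -> Image.Image:
    """Re-encode the image with lossy JPEG, discarding the high-frequency
    components that many availability poisoning perturbations rely on."""
    image = image.convert("RGB")
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

# Standard training then simply applies h = jpeg_transform to every poisoned image,
# i.e., the loss in eq. (3) becomes criterion(model(to_tensor(jpeg_transform(x))), y).
```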
Data preprocessing methods preprocess the training images to eliminate the poisoning perturbations prior to training. Image shortcuts squeezing (ISS) [23] consists of simple countermeasures based on image compression, including grayscale transformation, JPEG compression, or bit-depth reduction (BDR) to perform poison removal. Recently, AVATAR [7] leverages the method proposed in DiffPure [28] to employ diffusion models to disrupt deliberate perturbations while preserving semantics in the training images. On the other hand, training-phase defense algorithms apply specific modifications to the training phase to defense against availability attacks. Adversarial training has long been considered the most effective defense mechanism [17, 10] against such attacks. Recent report [35] finds that peak accuracy can be reached early in the training of availability poisons, and thus early stopping can be an effective mean of training-phase defense. Adversarial augmentations [30] sample multiple augmentations on one image, and train models on the maximum loss of all augmented images to prevent learning from poisoning shortcuts. For referential baselines, APBench also includes commonly used data augmentation techniques such as Gaussian blur, random crop and flip (standard training), CutOut [6], CutMix [46], and MixUp [47], and show their (limited) effect in mitigating availability poisons. ### 2.3 Related Benchmarks Availability poisoning is closely connected to the domains of adversarial and backdoor attack and defense algorithms. Adversarial attacks primarily aim to deceive models with adversarial perturbations during inference to induce misclassifications. There are several libraries and benchmarks available for evaluating adversarial attack and defense techniques, such as Foolbox [31], AdvBox [13], and RobustBench [4]. Backdoor or data poisoning [3] attacks focus on injecting backdoor triggers into the training algorithm or data respectively, causing trained models to misclassify images containing these triggers while maintaining or minimally impacting clean accuracy. In contrast to APAs, such attacks introduce hidden behaviors into the model that can be triggered by specific inputs, often for malicious purposes. Benchmark libraries specifically designed for backdoor attacks and defenses include TrojanZoo [29], Backdoornbench [42], and Backdoorbox [22]. Moreover, [38, 11] introduce benchmarks and frameworks for data poisoning attacks. However, there is currently a lack and an urgent need of a dedicated and comprehensive benchmark that standardizes and evaluates availability poisoning attack and defense strategies. To the best of our knowledge, APBench is the first benchmark that fulfills this purpose. It offers an extensive library of recent attacks and defenses, explores various perspectives, including the impact of poisoning rates and model architectures, as well as attack transferability. We hope that APBench can make significant contributions to the community and foster the development of future availability attacks for effective privacy protection. 3 A Unified Availability Poisoning Benchmark As shown in Figure 2, APBench consists of three main components: (a) The availability poisoning attack module. This library includes a set of representative availability poisoning attacks that can generate unlearnable versions of a given clean dataset. (b) The poisoning defense module. 
This module integrates a suite of state-of-the-art defenses that can effectively mitigate the unlearning effect and restore clean accuracies to a certain extent. (c) The evaluation module. This module can efficiently analyze the performance of various availability poisoning attack methods using accuracy metrics and visual analysis strategies.

Figure 2: The overall system design of APBench.

We built an extensible codebase as the foundation of APBench. In the attack module, we provide a total of 9 availability poisoning attacks of 3 different perturbation types ($\ell_p$) for supervised learning, and 2 attacks for unsupervised learning. For each availability poisoning attack method, we can generate its respective poisoned dataset. This module also allows us to further expand to different perturbation budgets and poisoning ratios, and to easily extend to future poisoning methods. Using the poisoned datasets generated by the attack module, we can evaluate defenses through the defense module. The goal of this module is to ensure that models trained on unlearnable datasets can still generalize well on clean data. The defense module primarily achieves poisoning mitigation through data preprocessing or training-phase defenses. Finally, the evaluation module computes the accuracy metrics of different attack and defense combinations, and can also perform qualitative visual analyses to help understand the characteristics of the datasets.

Table 1: Availability poisoning attack algorithms implemented in APBench. "Type" and "Budget" respectively denote the type of perturbation and its budget. "Mode" denotes the training mode, where "S" and "U" respectively mean supervised and unsupervised training. "No surrogate" denotes whether the attack requires access to a surrogate model for perturbation generation. "Class-wise" and "Sample-wise" indicate if the attack supports class-wise and sample-wise perturbation generation. "Stealthy" denotes whether the attack is stealthy to humans.

| Attack Method | Type | Budget | Mode | No surrogate | Class-wise | Sample-wise | Stealthy |
|---------------|------|--------|------|--------------|------------|-------------|----------|
| DC [8] | $\ell_\infty$ | | S | | | | |
| NTGA [45] | $\ell_\infty$ | | S | | | | |
| HYPO [40] | $\ell_\infty$ | | S | | | | |
| EM [17] | $\ell_\infty$ | 8/255 | S | | | | |
| REM [10] | $\ell_\infty$ | | S | | | | |
| TAP [9] | $\ell_\infty$ | | S | | | | |
| UCL [14] | | | U | | | | |
| TUE [32] | | | U | | | | |
| LSP [44] | $\ell_2$ | 1.30 | S | | | | |
| AR [36] | $\ell_2$ | 1.00 | S | | | | |
| OPS [43] | $\ell_0$ | 1 | S | | | | |

Table 2: Availability poisoning defense algorithms implemented in APBench.
| Defense Method | Type | Time Cost | Description |
|----------------|------|-----------|-------------|
| Standard | Data augmentations | Low | Random image cropping and flipping |
| CutOut [6] | Data augmentations | Low | Random image erasing |
| MixUp [47] | Data augmentations | Low | Random image blending |
| CutMix [46] | Data augmentations | Low | Random image cutting and stitching |
| Gaussian (used in [23]) | Data preprocessing | Low | Image blurring with a Gaussian kernel |
| BDR (used in [23]) | Data preprocessing | Low | Image bit-depth reduction |
| Gray (used in [23]) | Data preprocessing | Low | Image grayscale transformation |
| JPEG (used in [23]) | Data preprocessing | Low | Image compression |
| AVATAR [7] | Data preprocessing | High | Image corruption and restoration |
| Early stopping [35] | Training-phase defense | Low | Finding peak validation accuracy |
| UEraser-Lite [30] | Training-phase defense | Low | Stronger data augmentations |
| UEraser-Max [30] | Training-phase defense | High | Adversarial augmentations |
| AT [25] | Training-phase defense | High | Adversarial training |

Our benchmark currently includes 9 supervised and 2 unsupervised availability poisoning attacks, 9 defense algorithms, and 4 traditional image augmentation methods. In Table 1 and Table 2, we provide a brief summary of the properties of the attack and defense algorithms. More detailed descriptions of each algorithm are provided in Appendix B.

### 4 EVALUATIONS

**Datasets** We evaluated our benchmark on 4 commonly used datasets (CIFAR-10 [20], CIFAR-100 [20], SVHN [27], and an ImageNet [5] subset) and 5 mainstream models (ResNet-18 [15], ResNet-50 [15], SENet-18, MobileNetV2 [34], and DenseNet-121 [16]). To ensure a fair comparison between attack and defense methods, we used only the basic version of training for each model. Appendix A summarizes the specifications of the datasets, the test accuracies achievable through standard training on clean training data, and further details of each dataset.

**Attacks and defenses** We evaluated combinations of the availability poisoning attacks and defense methods introduced in Section 3. Moreover, we explored 5 different data poisoning rates and 5 different models. In addition, we also explore two availability poisonings for unsupervised learning (UCL [14] and TUE [32]) and evaluate them on the recently proposed defenses (Gray, JPEG, Early stopping (ES), UEraser-Lite [30], and AVATAR [7]). The implementation details of all algorithms and additional results can be found in Appendix B.

**Types of Threat Models** We can classify the attacks into three distinct availability poisoning threat models: $\ell_\infty$-bounded attacks (DC, NTGA, EM, REM, TAP, and HYPO); $\ell_2$-bounded attacks (LSP and AR); and an $\ell_0$-bounded attack (OPS). Given that $\ell_0$ perturbations resist disruption from image preprocessing or augmentations and remain unaffected by $\ell_\infty$ adversarial training, the $\ell_0$-bounded OPS attack demonstrates robustness against a plethora of defenses. Conversely, in terms of stealthiness, the $\ell_0$ attacks are less subtle than their $\ell_\infty$ and $\ell_2$ counterparts, as illustrated in Figure 1. Perturbations bounded by both $\ell_\infty$ and $\ell_2$ are comparable w.r.t. the degree of visual stealthiness and effectiveness.
Importantly, the two $\ell_2$-bounded attacks (LSP and AR) do not require surrogate model training, and are thus more efficient in unlearnable example synthesis.

**Training settings** We trained the CIFAR-10, CIFAR-100 and ImageNet-subset models for 200 epochs and the SVHN models for 100 epochs. We used the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 and a learning rate of 0.1 by default. As for unsupervised learning, all experiments are trained for 500 epochs with the SGD optimizer. The learning rate is 0.5 for SimCLR [1] and 0.3 for MoCo-v2 [2]. Please note that we generate sample-wise perturbations for all availability poisoning attacks. Specific settings for each defense method may have slight differences, and detailed information can be found in Appendix C.

**Standard Scenario** To start, we consider a common scenario where both the surrogate model and target model are ResNet-18, and the poisoning rate is set to 100%. We first evaluate the performance of the supervised poisoning methods against 4 state-of-the-art defense mechanisms and 4 commonly used data augmentation strategies. Table 3 presents the evaluation results on CIFAR-10 from our benchmark. It is evident that the conventional data augmentation methods appear to be ineffective against all poisoning methods. Yet, even simple image compression methods (BDR, grayscale, and JPEG corruption from ISS [23]) demonstrate a notable effect in mitigating the poisoning attacks, but fail to achieve high clean accuracy. Despite requiring more computational cost or additional resources (pretrained diffusion models for AVATAR), methods such as UEraser-Max [30] and AVATAR [7] generally surpass the image compression methods from ISS in terms of effectiveness. While AVATAR is inferior to UEraser-Max in recovering accuracy, it decouples the defense into an independent data sanitization phase, allowing it to be directly used in all existing training scenarios. While the early stopping (ES) method can be somewhat effective as a defense, it is not usually considered a good one, mainly because the peak accuracy reached under availability poisoning is not ideal. Adversarial training appears effective, but in many cases it is outperformed by even a simple JPEG compression; it also fails notably against OPS, as the $\ell_\infty$ perturbation budget cannot mitigate $\ell_0$ threats. We further conduct experiments on the CIFAR-100, SVHN, and ImageNet-subset datasets, and the results are shown in Table 4. Our findings indicate that perturbations constrained by traditional $\ell_p$ norms are ineffective against adversarial augmentation (UEraser-Max) and image restoration by pretrained diffusion models (AVATAR), as these defenses do not rely on the $\ell_p$-constraint assumption. Even simple image compression techniques (JPEG, Grayscale, and BDR) can effectively remove the effect of perturbations. At this stage, availability poisoning attacks that rely on $\ell_p$-bounded perturbations may not be as effective as initially suggested.

### 4.1 Challenging Scenarios

To further investigate the effectiveness and robustness of availability poisoning attacks and defenses, we conducted evaluations in more challenging scenarios. We considered partial poisoning scenarios, larger perturbation poisoning, and the attack transferability to different models.

**Partial poisoning** In realistic scenarios, it is difficult for an attacker to modify the entire dataset.
We thus investigate the impact of poisoning rate on the performance of availability poisoning. Figure 3 presents the results on CIFAR-10 and ResNet-18, w.r.t. each poisoning rate for attack-defense pairs, where each subplot corresponds to a specific poisoning attack method. We explore four different poisoning rates (20%, 40%, 60%, 80%). Privacy protection under partial poisoning As can be seen in Figure 3, the test accuracy of the model in the case of partial poisoning is only slightly lower than that in the case of a completely Table 3: Test accuracies (%) of models trained on poisoned CIFAR-10 datasets. The model trained on a clean CIFAR-10 dataset attains an accuracy of 94.32%. | Method | Standard | CutOut | CutMix | MixUp | Gaussian | BDR | Gray | JPEG | ES | U-Max | AVATAR | AT | |--------|----------|--------|--------|-------|----------|-----|------|------|----|-------|---------|----| | DC | 15.19 | 19.94 | 17.91 | 25.07 | 16.10 | 67.73| 85.55| 83.57| 26.08| 92.17 | 82.10 | 76.85| | EM | 20.78 | 18.79 | 22.28 | 31.14 | 14.71 | 37.94| 92.03| 80.72| 25.39| 93.61 | 75.62 | 82.51| | REM | 17.47 | 21.96 | 26.22 | 43.07 | 21.80 | 58.60| 92.27| 85.44| 31.32| 92.43 | 82.42 | 77.46| | HYPO | 70.38 | 69.04 | 67.12 | 74.25 | 62.17 | 74.82| 63.35| 85.21| 70.52| 88.44 | 85.94 | 81.49| | NTGA | 22.76 | 13.78 | 12.91 | 20.59 | 19.95 | 59.32| 70.41| 68.72| 28.19| 86.78 | 86.22 | 69.70| | TAP | 6.27 | 9.88 | 14.21 | 15.46 | 7.88 | 70.75| 11.01| 84.08| 39.54| 79.05 | 87.75 | 79.92| | LSP | 13.06 | 14.96 | 17.69 | 18.77 | 18.61 | 53.86| 64.70| 80.14| 29.10| 92.83 | 76.90 | 81.38| | AR | 11.74 | 10.95 | 12.60 | 14.15 | 13.83 | 36.14| 35.17| 84.75| 44.29| 90.12 | 88.60 | 81.15| | OPS | 14.69 | 52.98 | 64.72 | 49.27 | 13.38 | 37.32| 19.88| 78.48| 38.20| 77.99 | 66.16 | 14.95| Table 4: Test accuracies (%) on poisoned CIFAR-100, SVHN and ImageNet-subset datasets. | Dataset | Method | Standard | CutOut | CutMix | MixUp | Gaussian | BDR | Gray | JPEG | ES | U-Max | |---------------|--------|----------|--------|--------|-------|----------|-----|------|------|----|-------| | CIFAR-100 | EM | 3.03 | 4.15 | 3.98 | 6.46 | 2.99 | 34.10| 59.14| 58.71| 7.06| 68.81 | | | REM | 3.73 | 4.00 | 3.71 | 10.90 | 3.59 | 29.16| 57.47| 55.60| 10.99| 67.72 | | | LSP | 2.56 | 2.33 | 4.52 | 4.86 | 1.71 | 27.12| 39.45| 52.82| 9.52 | 68.31 | | | AR | 1.87 | 1.63 | 3.17 | 2.35 | 2.62 | 31.15| 16.13| 54.73| 26.58| 55.95 | | SVHN | EM | 10.33 | 13.38 | 10.77 | 12.79 | 8.82 | 36.65| 65.66| 86.14| 13.47| 90.24 | | | REM | 14.02 | 18.92 | 9.55 | 19.56 | 7.54 | 42.52| 19.59| 90.58| 19.61| 88.26 | | | LSP | 12.16 | 12.98 | 8.17 | 18.86 | 7.15 | 26.67| 16.90| 84.06| 12.91| 90.64 | | | AR | 19.23 | 14.92 | 6.71 | 13.52 | 7.75 | 39.24| 10.00| 92.46| 89.32| 90.07 | | ImageNet-100 | EM | 2.94 | 4.05 | 4.73 | 4.15 | 3.15 | 6.45 | 12.20| 31.73| 8.80 | 44.07 | | | REM | 3.66 | 4.13 | 4.78 | 3.94 | 4.28 | 4.03 | 3.95 | 40.98| 17.19| 42.14 | | | LSP | 38.52 | 40.56 | 29.78 | 7.85 | 42.68 | 26.58| 25.18| 36.83| 39.52| 63.28 | Figure 3: The efficacy in test accuracies (%, vertical axes) of defenses (No defense, Grayscale, JPEG, and UEraser-Max) against different partial poisoning attacks including EM (a), REM (b), LSP (c), and AR (d) with poisoning ratios (horizontal axes) ranging from 20% to 80%. clean dataset. This raises the following question: Are APAs effective in protecting only a portion of the training data? 
To answer this, we apply poisoning perturbations from APAs to a varying portion of the training data, and investigate how well the models learn the original features that exist in the poisoned images for different poisoning rates. To this end, Figure 4 evaluates and compares the mean losses of the unlearnable images used during training ("Unlearnable"), the original images of the unlearnable part ("Original"), and, for reference, images unseen by the model from the test set ("Unseen"); "Train" denotes the loss on the clean part of the training set. We find that the losses on the original images of the unlearnable part are similar to those on the test set, or even lower. This suggests that the availability poisoning perturbations can reasonably protect the private data against undefended learning. For a similar comparison of accuracies, please refer to Appendix C.1.

**Larger perturbations** We increased the magnitude of perturbations in availability poisoning attacks to further evaluate the performance of attacks and defenses. Table 5 presents the results of availability poisoning with larger perturbations on CIFAR-10. With such significant perturbations, stealthiness is further reduced, making it challenging to carry out such attacks in realistic scenarios. However, larger perturbations indeed have a more pronounced impact on suppressing defense performance, leading to significant accuracy losses for all defense methods. There exists a trade-off between perturbation magnitude and accuracy recovery. Considering that availability poisoning with larger perturbations is dramatically less stealthy and that some defense methods remain effective, larger perturbations are not recommended.

Figure 4: The mean losses (vertical axes) indicate that original features in unlearnable examples are not learned by the model. All evaluations consider partial poisoning scenarios (poisoning rates from 20% to 80%, horizontal axes). Note that "Unlearnable" and "Original" respectively denote the set of unlearnable examples and their original clean variants, "Train" denotes the loss on the clean part of the training set, and "Unseen" denotes images from the test set unobserved during model training.

Table 5: Test accuracies (%) on poisoned CIFAR-10 datasets with increased perturbations.

| Method | Budget | No defense | Gray | JPEG | ES | U-Max | AT |
|--------|--------|------------|------|------|----|-------|----|
| EM | $\ell_\infty = 16/255$ | 18.74 | 76.76 | 55.96 | 27.39 | 88.09 | 77.82 |
| REM | $\ell_\infty = 16/255$ | 19.80 | 83.65 | 80.07 | 33.07 | 80.36 | 75.64 |
| LSP | $\ell_2 = 1.74$ | 15.83 | 37.60 | 42.83 | 27.30 | 87.20 | 77.92 |
| AR | $\ell_2 = 1.50$ | 11.20 | 26.10 | 78.24 | 20.96 | 68.42 | 70.14 |

Table 6: Clean test accuracies of different CIFAR-10 target models, where attacks are oblivious to the model architectures. Note that AR and LSP are surrogate-free, and for EM and REM the surrogate model is ResNet-18.
| Model | Clean | Method | No defense | Gray | JPEG | ES | U-Max | AVATAR | |-------------|-------|--------|------------|------|------|----|-------|--------| | ResNet-50 | 94.47 | EM | 14.41 | 83.40 | 76.88 | 26.69 | 85.89 | 77.64 | | | | REM | 16.26 | 87.26 | 75.79 | 31.37 | 92.69 | 83.68 | | | | LSP | 19.23 | 68.94 | 73.24 | 32.73 | 93.08 | 76.47 | | | | AR | 11.83 | 27.51 | 80.24 | 28.66 | 81.40 | 86.39 | | SENet-18 | 94.83 | EM | 13.60 | 86.03 | 79.35 | 16.35 | 83.27 | 74.22 | | | | REM | 20.99 | 84.50 | 78.92 | 22.85 | 93.17 | 84.37 | | | | LSP | 18.54 | 65.06 | 76.51 | 26.38 | 92.53 | 75.19 | | | | AR | 13.68 | 34.26 | 79.29 | 37.04 | 75.06 | 84.37 | | MobileNetV2 | 94.62 | EM | 15.62 | 77.21 | 70.96 | 16.71 | 82.71 | 75.62 | | | | REM | 20.83 | 80.81 | 72.27 | 21.92 | 91.03 | 82.77 | | | | LSP | 16.82 | 61.07 | 72.03 | 28.12 | 92.10 | 76.81 | | | | AR | 13.36 | 28.54 | 68.14 | 39.45 | 73.40 | 81.63 | | DenseNet-121| 95.08 | EM | 13.89 | 82.49 | 78.42 | 15.68 | 82.37 | 76.69 | | | | REM | 21.45 | 85.47 | 78.42 | 22.35 | 93.09 | 83.04 | | | | LSP | 18.94 | 67.95 | 74.90 | 26.86 | 93.47 | 78.22 | | | | AR | 13.43 | 25.51 | 81.12 | 36.51 | 82.36 | 89.92 | | ViT-small | 84.66 | EM | 21.47 | 80.42 | 72.64 | 30.91 | 74.29 | 54.84 | | | | REM | 32.17 | 79.65 | 74.92 | 43.07 | 83.27 | 73.57 | | | | LSP | 29.06 | 59.34 | 68.07 | 32.69 | 87.01 | 66.74 | | | | AR | 25.04 | 38.90 | 74.77 | 45.54 | 63.90 | 78.64 | | CaiT-small | 71.96 | EM | 17.01 | 64.76 | 63.75 | 39.69 | 63.37 | 41.94 | | | | REM | 26.11 | 65.05 | 66.43 | 47.39 | 72.05 | 62.53 | | | | LSP | 25.08 | 63.06 | 57.15 | 37.95 | 70.92 | 51.39 | | | | AR | 68.63 | 66.27 | 69.30 | 67.41 | 70.04 | 62.77 | **Attack transferability across models** In real-world scenarios, availability poisoning attackers can only manipulate the data and do not have access to specific details of the defender. Therefore, we conducted experiments on different model architectures. It is worth noting that all surrogate-based attack methods are considered using ResNet-18. The results are shown in Table 6. It is evident that all surrogate-based and -free poisoning methods exhibit strong transferability, while the three recently proposed defenses also achieve successful defense across different model architectures. **The only exception is the AR method, which fails against CaiT-small.** Table 7: Test accuracies (%) of adaptive poisoning with EM on ResNet-18. | Method | Standard | Gray | JPEG | U-Max | |--------------|----------|--------|-------|-------| | EM + Gray | 19.48 | 21.64 | 78.39 | 90.52 | | EM + JPEG | 20.67 | 90.29 | 76.25 | 93.22 | | EM + UEraser | 35.24 | 88.62 | 80.46 | 89.55 | Table 8: Test accuracies (%) of adaptive poisoning with REM on ResNet-18. | Method | Standard | Gray | JPEG | U-Max | |--------------|----------|--------|-------|-------| | REM + Gray | 16.70 | 56.33 | 82.47 | 91.37 | | REM + JPEG | 19.45 | 91.71 | 75.84 | 92.53 | | REM + UEraser| 21.61 | 89.26 | 77.51 | 91.84 | Adaptive poisoning We evaluated strong adaptive poisons against various defenses using two poisoning methods, EM [17] and REM [10]. We assume that the defenders can be adapted to three defenses (Gray, JPEG, and UEraser), by using the attack in the perturbation generation process. From Tables 7 and 8, it can be seen that adaptive poisoning significantly affects the performance of the Gray defense, but has less effect on JPEG and UEraser. Unsupervised learning We evaluated the availability poisoning attacks targeting unsupervised models on CIFAR-10. 
We considered two popular unsupervised learning frameworks: SimCLR [1] and MoCo-v2 [2]. All defense methods were applied before the data augmentation process, i.e., to the images prior to the frameworks' own augmentations. Therefore, we only applied UEraser-Lite, using it as a data preprocessing method. The results of all experiments are shown in Table 9.

Table 9: Performance of availability poisoning attacks and defenses on different unsupervised learning algorithms and datasets. Note that "U-Lite" denotes UEraser-Lite.

| Algorithm | Method | No Defense | Gray | JPEG | U-Lite | AVATAR |
|-----------|--------|------------|------|------|--------|--------|
| SimCLR | UCL | 47.25 | 46.91 | 66.76 | 68.42 | 83.22 |
| | TUE | 57.10 | 56.37 | 67.54 | 66.59 | 84.24 |
| MoCo-v2 | UCL | 53.78 | 53.34 | 65.44 | 72.13 | 83.08 |
| | TUE | 66.73 | 64.95 | 67.28 | 74.82 | 82.48 |

**Visual analyses** We provide visualization tools (Grad-CAM [39] and Shapley value maps [24]) to facilitate the analysis and understanding of availability poisoning attacks. We also use t-SNE [41] to visualize the availability poisons (Figure 7). Although t-SNE cannot accurately represent high-dimensional spaces, it aids in the global visualization of feature representations, allowing us to observe specific characteristics of availability poisons. For additional discussions on the visualizations, please refer to Appendix C.3.

**Future outlook** Future research on APAs should explore methods that enhance the resilience of perturbations. One approach to consider is the development of generalizable attacks that simultaneously target the DNNs being trained and diffusion models used for image restoration, while remaining robust against traditional and color distortions, among others. On the other hand, semantic-based perturbations offer an alternative strategy, as such modifications to images are challenging for defenses to remove.

### 5 CONCLUSIONS

We have established the first comprehensive and up-to-date benchmark for the field of availability poisoning, covering a diverse range of availability poisoning attacks and state-of-the-art defense algorithms. We have conducted effective evaluations and analyses of different combinations of attacks and defenses, as well as additional challenging scenarios. Through this new benchmark, our primary objective is to provide researchers with a clearer understanding of the current progress in the field of availability poisoning attacks and defenses. We hope it can enable rapid comparisons between existing methods and new approaches, while also inspiring fresh ideas through our comprehensive benchmark and analysis tools. We believe that our benchmark will contribute to the advancement of availability poisoning research and the development of more effective methods to safeguard privacy.

### 6 REPRODUCIBILITY STATEMENT

We provide an open-source implementation of all attacks and defenses in the supplementary material. Following the README file, users can run all experiments on their own devices to reproduce the results shown in the paper.

### 7 ETHICS STATEMENT

Similar to many other technologies, availability poisoning algorithms can be used for both beneficial and malicious purposes. We understand that these poisoning attack methods were originally proposed to protect privacy, but they can also be used to generate malicious data that introduces model backdoors.
The benchmark aims to promote an understanding of various availability poisoning attacks and defense methods, as well as encourage the development of new algorithms in this field. It is also important for us to raise awareness of the false sense of security provided by availability poisoning attacks. However, we emphasize that the use of these algorithms and evaluation results should comply with ethical guidelines and legal regulations. We encourage users to be aware of the potential risks of the technology and take appropriate measures to ensure its beneficial use for both society and individuals. REFERENCES [1] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020. [2] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020. [3] Antonio Emanuele Ciña, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, and Fabio Roli. Wild patterns reloaded: A survey of machine learning security against training data poisoning. ACM Comput. Surv., 55(13s), jul 2023. [4] Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo DeBenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670, 2020. [5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009. [6] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. [7] Hadi M Dolatabadi, Sarah Erfani, and Christopher Leckie. The devil’s advocate: Shattering the illusion of unexploitable data using diffusion models. arXiv preprint arXiv:2303.08500, 2023. [8] Ji Feng, Qi-Zhi Cai, and Zhi-Hua Zhou. Learning to confuse: generating training time adversarial data with auto-encoder. Advances in Neural Information Processing Systems, 32, 2019. [9] Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, and Tom Goldstein. Adversarial examples make strong poisons. Advances in Neural Information Processing Systems, 34:30339–30351, 2021. [10] Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data against adversarial learning. In International Conference on Learning Representations, 2022. [11] Jonas Geiping, Liam H Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, and Tom Goldstein. Witches’ brew: Industrial scale data poisoning via gradient matching. In International Conference on Learning Representations, 2021.
2h3m61LFWL
My main concern is the computational efficiency of the VBMLE method. The authors claim VBMLE method is computationally efficient. But in each time step, VBMLE needs to solve a hard optimization problem (Equation (10) in the paper). And as $t$ gets larger, the problem seems increasingly difficult to handle. The authors say they handle the optimization problem by
VALUE-BIASED MAXIMUM LIKELIHOOD ESTIMATION FOR MODEL-BASED REINFORCEMENT LEARNING IN DISCOUNTED LINEAR MDPs Anonymous authors Paper under double-blind review ABSTRACT We consider the infinite-horizon linear mixture Markov Decision Processes (MDPs), where the transition probabilities of the dynamic model can be linearly parameterized with the help of a predefined low-dimensional feature mapping. While the existing regression-based approaches have been theoretically shown to achieve nearly-optimal regret, they are computationally rather inefficient due to the need for a large number of optimization runs in each time step, especially when the state and action spaces are large. To address this issue, we propose to solve linear mixture MDPs through the lens of Value-Biased Maximum Likelihood Estimation (VBMLE), which is a classic model-based exploration principle in the adaptive control literature for resolving the well-known closed-loop identification problem of Maximum Likelihood Estimation. We formally show that (i) VBMLE enjoys $\tilde{O}(d\sqrt{T})$ regret, where $T$ is the time horizon and $d$ is the dimension of the model parameter, and (ii) VBMLE is computationally more efficient as it only requires solving one optimization problem in each time step. In our regret analysis, we offer a generic convergence result of MLE in linear mixture MDPs through a novel supermartingale construct and uncover an interesting connection between linear mixture MDPs and online learning, which could be of independent interest. Finally, the simulation results show that VBMLE significantly outperforms the benchmark methods in both empirical regret and computation time. 1 INTRODUCTION Model-based reinforcement learning (MBRL) is one fundamental paradigm that learns an optimal policy by alternating between two subroutines: estimation of the transition dynamics and planning according to the learned dynamics model. MBRL has been extensively studied in the tabular setting from various perspectives, including [Auer et al., 2008; Azar et al., 2017], which have been shown to achieve either optimal regret bounds or sample complexity. Despite the above success, the conventional tabular MBRL methods are known to be computationally intractable in RL problems with large state or action spaces due to the need for direct estimation and access to the per-state transition probability. To enable MBRL for large state and action spaces, one important recent attempt is to study Markov decision processes with linear feature mappings [Zhou et al., 2021b], which is termed linear mixture MDP subsequently in this paper. Specifically, linear mixture MDPs assume that the probability of each transition can be represented by $\langle \phi(s'|s,a), \theta^* \rangle$, where $\phi(\cdot|\cdot,\cdot)$ is a known feature function for each possible transition, and $\theta^*$ parametrizes the transition probabilities to be learned. This framework can readily encompass various related formulations, such as tabular MDPs, feature-based linear transition models [Yang & Wang, 2019], the linear combination of base models [Modi et al., 2020], and linear value function frameworks [Zanette et al., 2020]. Based on the existing literature of linear mixture MDPs, the existing approaches could be divided into two primary categories depending on the length of the learning horizon: episodic MDPs and infinite-horizon discounted MDPs. 
In episodic MDPs, one important feature is that the environment state could be conveniently reset to some initial state when a new episode starts. Several recent works have explored episodic MDPs through the use of value-targeted regression techniques, e.g., [Ayoub et al., 2020; Zhou et al., 2021a]. A more detailed survey of the related works for episodic MDPs is deferred to Section 2. By contrast, in infinite-horizon linear mixture MDPs, due to the absence of the periodic restart, addressing exploration and conducting regret analysis could be even more challenging than that in episodic MDPs. Some recent attempts tackle infinite-horizon linear mixture MDPs by designing regression-based approaches and establishing theoretical guarantees (Zhou et al., 2021a,b; Chen et al., 2022). However, their algorithms could suffer from high computational complexity and be intractable in practice for the following reasons: (i) In the existing regression-based approaches, in each time step, one would need to solve a constrained optimization problem of the action-value function for each state-action pair. (ii) Moreover, in order to represent the value function as a linear combination of the learned parameter vector $\theta$, it is necessary for those regression-based approaches to construct a vector $\phi_V(s, a) := \sum_{s' \in S} \phi(s'|s, a)V(s')$. As a result, the action-value function can be expressed as follows: $Q(s, a) = \langle \phi_V(s, a), \theta \rangle$. However, constructing $\phi_V$ could be computationally intractable when dealing with a large state space. These limitations render the regression-based approaches mentioned above rather challenging to implement and deploy in practice. Therefore, one important research question remains to be answered: **How to design an efficient model-based RL algorithm for infinite-horizon discounted linear mixture MDPs with provable regret guarantees?** In this paper, we answer the above question affirmatively. Specifically, to address the above limitations, we design a tractable approach based on the classic principle of Value-Biased Maximum Likelihood Estimation (VBMLE) (Kumar & Lin, 1982), which has shown promising results in recent developments in bandits (Hung et al., 2021; Hung & Hsieh, 2023) and tabular RL (Mete et al., 2021), and leverage the value biasing technique to enforce exploration. The major advantage of VBMLE is that with the help of value biasing, it requires solving only one optimization problem for learning the dynamics model parameter at each time step and thereby enjoys a significantly lower computational complexity than the regression-based approaches. Moreover, we formally establish $\tilde{O}(d\sqrt{T})$ regret bound based on the following novel insights: (i) We establish a convergence result on the Maximum Likelihood Estimator for linear mixture MDPs by using a novel supermartingale approach. (ii) Through this construct, we also find useful connections between (i) the linear mixture MDPs and online portfolio selection problem as well as (ii) VBMLE and the Follow-the-Leader algorithm in online learning. We highlight the main contributions as follows: - We adapt the classic VBMLE principle to the task of learning the dynamic model for linear mixture MDPs. 
Our proposed algorithm addresses model-based RL for linear mixture MDPs from a distributional perspective, which learns the parameterized transition directly by maximum likelihood estimation without resorting to regression, and guides the exploration via value biasing instead of using concentration inequalities. - We establish the theoretical regret bound of VBMLE by providing a novel theorem connected to the confidence ellipsoid of MLE. Furthermore, we uncover an interesting connection between online learning and our regret analysis. - We conduct an empirical analysis to assess both the computational complexity and empirical regret performance. The simulation results demonstrate that VBMLE exhibits a clear advantage in terms of both effectiveness in regret and computational efficiency. ## 2 RELATED WORKS **VBMLE for Multi-Armed Bandits and RL.** Regarding VBMLE, various prior works have applied this method to different bandit settings and tabular MDP. Firstly, Liu et al. (2020) focuses on solving non-contextual bandits with exponential family reward distributions. Next, Hung et al. (2021) introduces two variations of VBMLE: LinRBMLE and GLM-RBMLE. These methods are designed for solving linear contextual bandits and result in an index policy. Furthermore, Hung & Hsieh (2023) leverages the representation power of neural networks and proposes NeuralRBMLE. This approach is specifically designed for solving neural bandits, making no assumptions about the unknown reward distribution. As for the MDP setting, Mete et al. (2021) has adapted VBMLE to solve tabular MDPs, where the states and actions belong to a known finite set, while Mete et al. (2022) analyzed the finite performance of a constrained version of VBMLE. By contrast, this paper takes the very first step towards understanding the theoretical regret performance of VBMLE in RL beyond the tabular settings. Episodic MDPs with Function Approximation. Based on the class of transition dynamics models, the episodic MDPs with function approximation could be divided into the following categories: - **Linear mixture MDPs**: To tackle large MDPs, one common approach is to leverage linear mixture MDPs, which enables a compact model representation through feature mapping. For instance, from a model-free perspective, (Cai et al., 2020) proposes an optimistic variant of Proximal Policy Optimization algorithm (OPPO) to address exploration in linear mixture MDPs. (Ayoub et al., 2020) addresses linear mixture MDPs by proposing UCRL-VTR, which extends the classic UCRL algorithm (Jaksch et al., 2010) by using the value-targeted model regression as an optimistic approach for constructing the confidence set. Both the above methods achieve $\tilde{O}(d\sqrt{H^3T})$ regret bound, where $H$ is the episode length, $d$ is the feature dimension, and $T$ is the total steps. Later, (Zhou et al., 2021a) provides a new tail inequality and adapts the weighted ridge regression to UCRL-VTR to improve the regret bound $\tilde{O}(dH\sqrt{T})$. Moreover, (Yang & Wang, 2020) studies an interesting type of linear mixture MDPs that are bilinear in two feature embeddings and proposes MatrixRL to achieve $\tilde{O}(dH^2\sqrt{T})$ regret bound. More recently, (He et al., 2022) proposes to improve OPPO with a new Bernstein-type bonus and achieve a near-optimal $\tilde{O}(dH\sqrt{T})$ regret. 
- **Linear MDPs**: Another related but different type of linear models is the linear MDP, where the transition model takes the form of the inner product between a state-action feature vector and the vector of unknown measures over states. For instance, (Jin et al., 2020) presents an optimistic variant of Least-Squares Value Iteration that achieves $\tilde{O}(\sqrt{d^3H^3T})$ regret. (Wang et al., 2019) studies another related but more general class of MDPs with generalized linear function approximation and an optimistic closure assumption and presents value-based approaches with $\tilde{O}(H\sqrt{d^3T})$ regret bound. - **General model classes**: Recently, there are several works that address episodic MDPs under general function approximation, where a class of possible transition models is given to the algorithm. For instance, (Zhang, 2022) proposes a variant of Thompson sampling to favor models with high rewards for more aggressive exploration. Later, (Zhong et al., 2022) presents a new complexity measure, namely the generalized eluder coefficient, and proposes a variant of posterior sampling algorithm under a general model class. Infinite-Horizon Discounted MDPs With Function Approximation. Without the restart capability of episodic MDPs, infinite-horizon discounted MDPs pose a unique challenge of tackling planning and exploration in the same single trajectory. As a result, the theoretical understanding of this setting under function approximation remains limited. For example, under linear MDPs, (Yang & Wang, 2019) proposes a variant of Q-learning that requires $\tilde{O}(d/(1-\gamma)^2\epsilon^2)$ samples to find an $\epsilon$-optimal policy, where $\gamma$ is the discount factor. Under the linear mixture MDPs, (Zhou et al., 2021b) proposes the UCLK algorithm, which takes into consideration the confidence set of the least-square estimator of the model parameter and establishes a regret upper bound of $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$. Subsequently, (Zhou et al., 2021a) introduces an improved version of UCLK, called UCLK+, by incorporating the weighted ridge regression into the original UCLK and achieves a regret bound that matches the lower bound of $\tilde{O}(d\sqrt{T}/(1-\gamma)^{1.5})$ established by (Zhou et al., 2021b). On the other hand, (Chen et al., 2022) also provides a variant of UCLK, namely UPAC-UCLK, which achieves $\tilde{O}(d\sqrt{T}/(1-\gamma)^2) + \tilde{O}(\sqrt{T}/(1-\gamma)^3)$ regret bound along with uniform-PAC sample complexity guarantee. However, the above UCLK-based approaches are computationally inefficient as they all require the costly extended value iteration for each state-action pair in each policy update, and this is already not tractable in MDPs of moderate sizes. Our work falls in this category and presents VBMLE as a computationally efficient solution to infinite-horizon linear mixture MDPs. 3 Problem Formulation **Markov Decision Processes (MDP) and Linear Feature Mapping.** An MDP is denoted by $\mathcal{M} := \langle S, A, P, R, T, \mu_0 \rangle$, where $S$ and $A$ represent the state and action spaces, respectively, $P$ is the dynamic model, $R : S \times A \rightarrow [0, 1]$ is the reward function, $T$ is the time horizon, and $\mu_0$ is the initial state distribution with $\mu_0(s) > 0$. A linear mixture MDP is defined by the following: 1. 
(Footnote 1: As there is a policy that achieves optimal value for all initial states $s$, or equivalently, for all initial distributions $\mu_0$, it is without loss of generality to take a strictly positive initial distribution.)

• There exists an unknown parameter $\theta^* \in \mathbb{R}^d$ and a known feature mapping $\phi(\cdot|\cdot,\cdot) : S \times A \times S \to \mathbb{R}^d$, such that $P(s'|s,a) = \langle \phi(s'|s,a), \theta^* \rangle$, $\forall s', s \in S, a \in A$.

• $\|\theta^*\|_2 \leq \sqrt{d}$ and $\|\phi(s'|s,a)\|_2 \leq L$, $\forall s', s \in S, a \in A$.

Moreover, let $\mathcal{P}$ denote the set of parameters that correspond to the product of the simplices for each (state, action) pair:

$$\mathcal{P} := \left\{ \theta : 0 \leq \langle \phi(\cdot|s,a), \theta \rangle \leq 1, \sum_{s' \in S} \langle \phi(s'|s,a), \theta \rangle = 1, \forall s \in S, a \in A \right\},$$

where $\theta$ denotes the parameter of the transition dynamics model and $\phi(\cdot|s,a)$ is the known feature mapping function. A policy $\pi : S \to \Delta(A)$, where $\Delta(A)$ is the set of all probability distributions on $A$, is designed to maximize the sum of discounted rewards, which is captured by the value function:

$$V^\pi(s; \theta) := \mathbb{E}_{a_i \sim \pi(\cdot|s_i),\, s_{i+1} \sim \langle \phi(\cdot|s_i,a_i), \theta \rangle} \left[ \sum_{i=0}^{\infty} \gamma^i r(s_i,a_i) \,\middle|\, s_0 = s \right].$$

Similarly, the action value function $Q^\pi(s,a; \theta)$ is defined as

$$Q^\pi(s,a; \theta) := \mathbb{E}_{a_i \sim \pi(\cdot|s_i),\, s_{i+1} \sim \langle \phi(\cdot|s_i,a_i), \theta \rangle} \left[ \sum_{i=0}^{\infty} \gamma^i r(s_i,a_i) \,\middle|\, s_0 = s, a_0 = a \right].$$

Moreover, we let $J(\pi; \theta) := \mathbb{E}_{s \sim \mu_0}[V^\pi(s; \theta)]$ denote the mean reward achievable for the MDP with parameter $\theta$ under policy $\pi$ over the initial probability distribution $\mu_0$.

**Optimal Value and Regret.** We then define the optimal value function to be the maximum value obtained by a policy: $V^*(s; \theta) = \max_\pi V^\pi(s; \theta)$. In the discounted linear mixture MDP setting (Zhou et al., 2021b), the cumulative regret $R(T)$ for the MDP with parameter $\theta^*$ is defined to be the total difference in value between the optimal policy and the learned policy $\pi_t$, where

$$R(T) := \sum_{t=1}^{T} \left[ V^*(s_t; \theta^*) - V^{\pi_t}(s_t; \theta^*) \right], \quad s_1 \sim \mu_0.$$

Based on the fundamental result that there exists a policy that achieves optimal value for all states, we use $\pi^*(\theta)$ to denote an optimal policy with respect to a given model parameter $\theta$ as

$$\pi^*(\theta) := \arg\max_\pi J(\pi; \theta).$$

### 4 VBMLE for Linear Mixture MDPs

**Introduction to the VBMLE Principle.** We now introduce the idea behind the classic value biasing principle. Consider first the certainty equivalence principle (Kumar & Varaiya, 2015) employing the straightforward Maximum Likelihood Estimate (MLE) as

$$\hat{\theta}_t := \arg\max_{\theta \in \mathcal{P}} \left\{ \prod_{i=1}^{t-1} p(s_{i+1}|s_i,a_i;\theta) \right\}.$$

That is, at each time step $t$, the learner employs the policy $\pi^{\text{MLE}}_t$ that is optimal for the current estimate $\hat{\theta}_t$.
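Both the certainty-equivalence policy above and the value bias introduced in the next paragraphs require evaluating $V^*(\cdot; \theta)$ (and the corresponding $Q^*$) for a candidate parameter $\theta$. The paper computes this with a standard Value Iteration routine (Algorithm 2 in the appendix, not reproduced here); the following NumPy sketch, written for small dense problems with illustrative array shapes, is an assumed implementation of that routine under the linear parameterization $P(s'|s,a) = \langle \phi(s'|s,a), \theta \rangle$.

```python
import numpy as np

def value_iteration(phi, theta, r, gamma=0.9, tol=1e-8):
    """Compute V*(.; theta) and Q*(., .; theta) by Bellman backups, assuming
    phi has shape (S, A, S, d) with phi[s, a, s'] the feature of transition
    (s, a, s'), r has shape (S, A), and <phi[s, a, s'], theta> = P(s'|s, a)."""
    S, A = r.shape
    P = phi @ theta                 # (S, A, S): transition probabilities under theta
    V = np.zeros(S)
    while True:
        Q = r + gamma * (P @ V)     # Bellman backup for every state-action pair
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new
```

Evaluating the bias term $V^*(s_t; \theta)$ inside a value-biased objective then amounts to one such call per candidate $\theta$.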
Under appropriate technical conditions, it has been shown in (Borkar & Varaiya, 1979) that $\hat{\theta}_t$ converges almost surely to a random $\hat{\theta}_\infty$, for which

$$p(s'|s, \pi^{\text{MLE}}_\infty(s); \hat{\theta}_\infty) = p(s'|s, \pi^{\text{MLE}}_\infty(s); \theta^*), \quad \forall s, s' \in S,$$

where $\pi^{\text{MLE}}_\infty$ represents the optimal policy corresponding to $\hat{\theta}_\infty$. This convergence property is called the closed-loop identification property; it means that asymptotically the transition probabilities resulting from the application of the policy $\pi^{\text{MLE}}_\infty$ are correctly estimated. An important consequence is that $J(\pi^*(\hat{\theta}_\infty); \hat{\theta}_\infty) = J(\pi^*(\hat{\theta}_\infty); \theta^*)$. Since $\pi^{\text{MLE}}_\infty$ is not necessarily optimal for $\theta^*$, this implies

$$J(\pi^*(\hat{\theta}_\infty); \hat{\theta}_\infty) \leq J(\pi^*(\theta^*); \theta^*). \tag{8}$$

**Algorithm 1** VBMLE for Reinforcement Learning in Linear Mixture MDPs
1: Input: $\alpha(t)$
2: for $t = 1, 2, \cdots$ do
3: &nbsp;&nbsp;&nbsp; $\theta^V_t := \arg\max_{\theta \in \mathbb{P}} \left\{ \sum_{i=1}^{t-1} \log \langle \phi(s_{i+1}|s_i, a_i), \theta \rangle + \frac{\lambda}{2} \|\theta\|_2^2 + \alpha(t) \cdot V^*(s_t; \theta) \right\}$
4: &nbsp;&nbsp;&nbsp; $a_t = \arg\max_{a \in \mathcal{A}} Q^*(s_t, a; \theta^V_t)$
5: end for

The idea of the Value-Biased method is to try to undo the bias in (8) by adding a bias term that favors parameters with larger optimal total return. This leads to the principle of Value-Biased Maximum Likelihood Estimate (VBMLE), originally proposed in the adaptive control literature by (Kumar & Lin, 1982), as follows:

$$\theta^{\text{VBMLE}}_t := \arg\max_{\theta \in \mathbb{P}} \left\{ \sum_{i=1}^{t-1} \log p(s_{i+1}|s_i, a_i; \theta) + \alpha(t) \cdot J(\pi^*(\theta); \theta) \right\}, \tag{9}$$

where $\alpha(t)$ is a positive increasing sequence that weights the bias in favor of parameters with larger total return. VBMLE employs this biasing method to handle the exploration-exploitation trade-off.

**VBMLE for Discounted Linear Mixture MDPs.** In this paper, we adapt the VBMLE principle to the RL problem in the linear mixture MDP setting. Specifically, at each step, the learner would (i) choose the parameter estimate that maximizes the regularized log-likelihood plus the value bias as

$$\theta^V_t := \arg\max_{\theta \in \mathbb{P}} \left\{ \sum_{i=1}^{t-1} \log \langle \phi(s_{i+1}|s_i, a_i), \theta \rangle + \frac{\lambda}{2} \|\theta\|_2^2 + \alpha(t) \cdot V^*(s_t; \theta) \right\}, \tag{10}$$

where $\lambda$ is a positive constant for regularization, and then (ii) employ an optimal policy with respect to $\theta^V_t$. Notice that the term $V^*(s_t; \theta)$ can be computed by using the standard Value Iteration presented as Algorithm 2 in the Appendix. If there are multiple maximizers for (10), then one could break the tie arbitrarily. For clarity, we also summarize the procedure of VBMLE in Algorithm 1.

**Features of VBMLE for Linear Mixture MDPs.** We highlight the salient features of the VBMLE method in Algorithm 1 as follows.

• **Computational Efficiency:** As mentioned earlier, UCLK (Zhou et al., 2021b) suffers from high computational complexity as it requires computing an estimate of the model parameter for each state-action pair in each iteration. This renders UCLK intractable when either the state space or the action space is large.
By contrast, the VBMLE approach, which applies a value bias to guide exploration on top of the MLE, only requires solving one single maximization problem for the dynamics model parameter $\theta$ in each iteration, making it considerably more computationally efficient. Accordingly, VBMLE could serve as a more computationally feasible algorithm for RL in linear mixture MDPs in practice.

• **VBMLE is Parameter-Free:** As shown in Algorithm 1, the only parameter required by VBMLE is $\alpha(t)$, which determines the weight of the value bias. As will be shown in Section 5, one could simply choose $\alpha(t) = \sqrt{t}$ to achieve the required regret bound, and moreover this simple choice also leads to superior empirical regret performance. As a result, VBMLE is parameter-free and therefore does not require any hyperparameter tuning.

• **Distributional Perspective:** In contrast to the existing RL methods for linear mixture MDPs (Ayoub et al., 2020; Zhou et al., 2021a,b; Chen et al., 2022) that aim to learn the unknown parameter via regression on the value function (or termed value-targeted regression), the proposed VBMLE takes a distributional perspective by directly learning the whole collection of transition probabilities through value-biased maximum likelihood estimation. This perspective has also been adopted by the prior works on applying VBMLE to contextual bandit problems (Hung et al., 2021; Hung & Hsieh, 2023). We also highlight the differences between VBMLE for RL and VBMLE for bandits in Appendix C.

Remark 1. Due to the non-concavity of the VBMLE objective, we propose to solve VBMLE by Bayesian optimization (BO), which is a powerful and generic method for provably maximizing (possibly non-concave) black-box objective functions. As a result, BO can provably find an $\epsilon$-optimal solution to VBMLE within finitely many iterations. Specifically:

• We have applied the GP-UCB algorithm, which is one classic BO algorithm and has been shown to provably find an $\epsilon$-optimal solution within $\tilde{O}(1/\epsilon^2)$ iterations under smooth (possibly non-concave) objective functions (Srinivas et al., 2012). Each sample taken by GP-UCB requires only one run of the standard Value Iteration in Algorithm 2.

• To further demonstrate the compatibility of VBMLE and BO, we have extended the regret analysis of VBMLE to the case where only an $\epsilon$-optimal VBMLE solution is obtained. Specifically, let $H$ denote the number of samples taken by GP-UCB in each maximization run of finding VBMLE. We show that VBMLE augmented with BO can achieve sub-linear regret, as shown in Theorem 4. By using a moderate $H$, one could easily recover the same regret bound as that of VBMLE with an exact maximizer. In our experiments, we find that choosing $H = 25$ is sufficient and also computationally efficient.

• The complexity of VBMLE with GP-UCB for finding $\theta_t^V$ amounts to solving the standard Value Iteration only $H + 1$ times ($H$ runs for BO, since each sample requires one value iteration of our objective function, plus one more value iteration for the returned $\theta_t^V$). This is a clear computational advantage over the EVI in UCLK.

### 5 REGRET ANALYSIS

In this section, we formally present the regret analysis of the VBMLE algorithm.
To begin with, we introduce the following useful notations:

$$\ell_t(\theta) := \sum_{i=1}^{t-1} \log \langle \phi(s_{i+1}|s_i, a_i), \theta \rangle + \frac{\lambda}{2} \| \theta \|_2^2$$ (11)

$$\theta_t^{\text{MLE}} := \arg\max_{\theta \in \mathbb{P}} \ell_t(\theta),$$ (12)

$$A_t := \sum_{i=1}^{t-1} \phi(s_{i+1}|s_i, a_i)\phi(s_{i+1}|s_i, a_i)^T + \lambda I.$$ (13)

If there are multiple maximizers for (12), then one could break the tie arbitrarily.

Assumption 1. The following information about the transition probability $P$ is known:

• The set of zero transitions $P_0 := \{(s, a, s') : P(s'|s, a) = 0,\ s, s' \in S, a \in A\}$.

• The lower bound on the non-zero transition probabilities $p_{\min} := \min_{(s,a,s') \notin P_0} P(s'|s, a)$.

We then redefine the probability simplex based on the above assumption as follows:

$$\mathbb{P} := \left\{ \theta \middle| p_{\min} \leq \langle \phi(s'|s, a), \theta \rangle \leq 1, \forall (s, a, s') \notin P_0; \langle \phi(s'|s, a), \theta \rangle = 0, \forall (s, a, s') \in P_0; \sum_{s' \in S} \langle \phi(s'|s, a), \theta \rangle = 1, \forall s \in S, a \in A \right\}.$$ (14)

Remark 2. This assumption suggests that the magnitude of the gradient of the log probability for the observed transition, denoted as $\|\nabla_\theta \log \langle \phi(s_{i+1}|s_i, a_i), \theta_t^{\text{MLE}} \rangle \|_2$, is bounded from above. A similar assumption is made in (Kumar & Lin, 1982; Mete et al., 2021). In some scenarios, the knowledge of $p_{\min}$ may not be readily available. To address this, we introduce an alternative version of VBMLE, termed Adaptive VBMLE. This variant employs the following adaptive constraint, resembling a probability simplex, to solve for $\theta^V$:

$$\mathbb{P}_t := \left\{ \theta \middle| \frac{1}{\log t} \leq \langle \phi(s'|s,a), \theta \rangle \leq 1, \forall (s,a,s') \notin P_0; \langle \phi(s'|s,a), \theta \rangle = 0, \forall (s,a,s') \in P_0; \sum_{s' \in S} \langle \phi(s'|s,a), \theta \rangle = 1, \forall s \in S, a \in A \right\}. \tag{15}$$

The regret bound for this variant is detailed in Theorem 3.

### 5.1 Convergence Analysis of MLE in Linear Mixture MDPs

To begin with, we highlight the main technical challenges as follows: A natural idea is to leverage the Azuma–Hoeffding inequality on the log-likelihood ratio $\ell_t(\theta^{MLE}_t) - \ell_t(\theta^*)$, and then bound the distance between $\theta^{MLE}_t$ and the true parameter $\theta^*$. However, it is known that the stochastic process induced by the maximum log-likelihood ratio is actually a sub-martingale (shown in Lemma 6 in the Appendix for completeness). To address this issue, we propose several novel techniques: (i) We first propose to construct a novel super-martingale (cf. Lemma 1) to characterize the convergence rate of the MLE in linear mixture MDPs, which could be of independent interest beyond RL problems. Interestingly, this supermartingale contains a term that could be interpreted as the regret in the online portfolio selection problem and thereby offers an interesting connection between linear mixture MDPs and online learning. (ii) Building on (i), to utilize the Azuma–Hoeffding inequality, we need to carefully handle the sum of squared supermartingale differences, which do not have an explicit uniform upper bound and require a more sophisticated argument. To begin with, we provide several useful definitions as follows.
Define the likelihood ratio as

$$L_t(\theta) := \prod_{i=1}^{t-1} \frac{\Pr(s_{i+1}|s_i, a_i; \theta)}{\Pr(s_{i+1}|s_i, a_i; \theta^*)} \cdot \exp\left(\frac{\lambda}{2} \| \theta \|_2^2\right). \tag{16}$$

We proceed to construct two useful helper stochastic processes as follows: For each $t \in \mathbb{N}$,

$$X_t := \ell_t(\theta^{MLE}_{t-1}) - \ell_t(\theta^*) - \sum_{i=1}^{t-1} z_i, \tag{17}$$

$$z_t := \ell_t(\theta^{MLE}_t) - \ell_t(\theta^{MLE}_{t-1}). \tag{18}$$

**Lemma 1.** For all $\lambda \geq 0$, the stochastic process $\{L_t(\theta^{MLE}_{t-1}) \cdot \prod_{i=1}^{t-1} \exp(-z_i)\}$ is a martingale, i.e.,

$$\mathbb{E}_{s_{t+1} \sim \Pr(\cdot|s_t, a_t; \theta^*)} \left[ L_{t+1}(\theta^{MLE}_t) \cdot \prod_{i=1}^{t} \exp(-z_i) \middle| \mathcal{F}_t \right] = L_t(\theta^{MLE}_{t-1}) \cdot \prod_{i=1}^{t-1} \exp(-z_i), \tag{19}$$

where $\mathcal{F}_t := \{s_1, a_1, \cdots, s_t, a_t\}$ denotes the causal information up to time $t$.

**Corollary 1.** For all $\lambda \geq 0$, the stochastic process $\{X_t\}$ is a supermartingale, i.e.,

$$\mathbb{E}_{s_{t+1} \sim \Pr(\cdot|s_t, a_t; \theta^*)} \left[ \ell_{t+1}(\theta^{MLE}_t) - \ell_{t+1}(\theta^*) - \sum_{i=1}^{t} z_i \middle| \mathcal{F}_t \right] \leq \ell_t(\theta^{MLE}_{t-1}) - \ell_t(\theta^*) - \sum_{i=1}^{t-1} z_i. \tag{20}$$

This corollary can be proved by applying Jensen's inequality to (19). Notably, Corollary 1 offers a useful insight that a supermartingale involving the log-likelihood ratio can still be constructed despite the fact that $\ell_t(\theta^{MLE}_t) - \ell_t(\theta^*)$ is a submartingale. This result generalizes the classic result in (Kumar & Lin, 1982, Lemma 3) for tabular MDPs to the linear mixture MDP setting, and also holds for the non-regularized MLE ($\lambda = 0$). To establish Theorem 1, we define a useful quantity $\Delta_t$ as

$$\Delta_t := \sum_{i=1}^{t-1} z_i = \sum_{i=1}^{t-1} \log \left( \frac{\phi_i(s_{i+1})^\top \theta^{MLE}_t}{\phi_i(s_{i+1})^\top \theta^{MLE}_i} \right), \tag{21}$$

where $\phi_i(s) := \phi(s|s_i, a_i)$ is a shorthand for the feature vector. In the following lemma, we present an upper bound for $\Delta_t$. Recall that $\|\phi(s'|s,a)\|_2 \leq L$, for all $s, s' \in S$ and $a \in A$.

Lemma 2. For all \( \lambda \geq 0 \), we have

\[ \Delta_t \leq \frac{8d^2}{p_{\text{min}}^2} \log \left( \frac{d\lambda + (t-1)L^2}{d} \right). \] (22)

Remark 3 (Connection between linear mixture MDPs and online learning). Through \( \Delta_t \) and Lemma 2, we can build an interesting connection between MLE in linear mixture MDPs and the Follow-the-Leader algorithm (Gaivoronski & Stella, 2000) in online learning. The connection is two-fold: (i) MLE in linear mixture MDPs can be viewed as a variant of the online portfolio selection problem: We find that the MLE optimization problem in linear mixture MDPs takes the same form as the classic online portfolio selection problem (Hazan et al., 2016). Specifically, the feature vectors and the dynamics model parameter in linear mixture MDPs correspond to the price vectors and the asset allocation, respectively. The main difference between the two problems lies in the feasible set and the constraints. (ii) Iterative MLE is equivalent to the Follow-the-Leader algorithm: Another interesting connection is that applying MLE in each time step corresponds to the classic Follow-the-Leader algorithm (Gaivoronski & Stella, 2000). Moreover, the term \( \Delta_t \) in (22) could be interpreted as the regret of the Follow-the-Leader algorithm in online learning.
With that said, one could verify that Lemma 2 is consistent with the regret quantified in (Gaivoronski & Stella, 2000). Based on the supporting lemmas introduced above, we are ready to formally present the convergence result of MLE in linear mixture MDPs.

Theorem 1. With probability at least \( 1 - \delta \), we have

\[ \| \theta^* - \theta_t^{\text{MLE}} \|_{A_t}^2 \leq \frac{37d^2}{p_{\text{min}}^2} \cdot \log \left( \frac{d\lambda + tL^2}{d} \right) \cdot \log \frac{1}{\delta}. \] (23)

The complete proof of Theorem 1 is provided in Appendix B.1, and here we provide a proof sketch:

Proof Sketch. Based on the result in Lemma 1, we can apply the Azuma–Hoeffding inequality presented in Lemma 5 to obtain a high-probability bound on the log-likelihood ratio. There are two main challenges that need to be handled: (i) The first one is the additional term \( \Delta_t \); we find a connection to the analysis of the online portfolio selection problem and use a similar approach to handle it. (ii) The other one is \( M_t \), which represents the cumulative difference of the super-martingale. We adopt a similar approach by considering a stopping time to ensure that this theorem holds with high probability.

5.2 REGRET BOUND OF VBMLE

In this subsection, we formally provide the regret bound of the proposed VBMLE algorithm.

Theorem 2. For every linear mixture MDP \( \mathcal{M} = \langle S, A, P, R, T, \mu_0 \rangle \), with probability at least \( 1 - \frac{1}{T} - 3\delta \) and choosing \( \alpha(t) = \sqrt{t} \), VBMLE in Algorithm 1 has a regret upper bound of

\[ R(T) = O \left( \frac{d\sqrt{T} \log T}{p_{\text{min}}^4 (1 - \gamma)^2} \right). \] (24)

The complete proof of Theorem 2 is provided in Appendix B.2, and here we provide a proof sketch:

1. Similar to the analysis of the upper-confidence bound approach, which uses the concentration inequality to replace the term associated with an optimal policy, under VBMLE we can replace \( V^*(s_t; \theta^*) \) by applying the objective function of VBMLE.

2. Then, there are two terms that need to be handled: (i) \( \| \theta_t^{\text{MLE}} - \theta^* \|_{A_t} \) and (ii) \( \| \theta_t^V - \theta^* \|_{A_t} \). We provide a novel theorem on the confidence ellipsoid of the maximum likelihood estimator in linear mixture MDPs in Theorem 1 to deal with (i).

3. In contrast to the regret analysis presented in (Hung et al., 2021), where the likelihood of an exponential family distribution was considered, analyzing regret in the linear mixture MDP setting is more complex due to the absence of simple closed-form expressions for both \( \theta_t^{\text{MLE}} \) and \( \theta_t^V \). Additionally, in this context, the bias term is not linear with respect to \( \theta \), even if we represent it as \( Q^*(s_t, a_t; \theta) = \langle \sum_{s' \in S} \phi(s'|s_t, a_t) V^*(s'; \theta), \theta \rangle \). To address these challenges, we adopt a novel approach by completing the square of \( \| \theta_t^V - \theta^* \|_{A_t} \) and successfully overcome the aforementioned problems. We also provide the regret analysis for this variant of VBMLE in Appendix D.

6 Numerical Experiments

We demonstrate the empirical performance of VBMLE in terms of both regret and computation time in this section. We conduct experiments on a simple environment with discrete state and action spaces. To provide a detailed understanding of how we transition from tabular MDPs to the linear mixture MDP setting, we have outlined the procedure in Appendix F.1.
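Since Appendix F.1 is not reproduced here, the following sketch shows the standard way a tabular MDP can be cast as a linear mixture MDP with indicator features, which is one plausible instantiation of that procedure; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def tabular_to_linear_mixture(P):
    """Embed a tabular MDP with transition tensor P[s, a, s'] as a linear mixture
    MDP: phi(s'|s, a) is a one-hot vector in R^d with d = |S| * |A| * |S|, and
    theta* stacks the true transition probabilities, so <phi(s'|s, a), theta*>
    recovers P[s, a, s'] exactly."""
    S, A, _ = P.shape
    d = S * A * S
    theta_star = P.reshape(d)                   # true model parameter
    def phi(s_next, s, a):
        e = np.zeros(d)
        e[(s * A + a) * S + s_next] = 1.0       # indicator feature for (s, a, s')
        return e
    return phi, theta_star

# Sanity check on a random 3-state, 2-action MDP.
rng = np.random.default_rng(0)
P = rng.random((3, 2, 3))
P /= P.sum(axis=2, keepdims=True)               # normalize rows into distributions
phi, theta = tabular_to_linear_mixture(P)
assert np.isclose(phi(2, 1, 0) @ theta, P[1, 0, 2])
```

Under this indicator embedding the feature dimension is $d = |S|^2|A|$; the appendix construction may use a more compact feature map, but the interface, namely known features plus an unknown parameter vector constrained to a probability simplex, is the same.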
The following result includes a comparison between VBMLE, UCLK (Zhou et al., 2021b), and UCLK+ (Zhou et al., 2021a), which are both well-known algorithms in the context of infinite-horizon linear mixture MDPs. Another baseline algorithm is PSRL (Osband et al., 2013), a popular benchmark method for tabular RL. Details regarding the selected hyperparameters can be found in Appendix F.2.

• Empirical Regret: Figure 1 provides the empirical regret of VBMLE, UCLK, UCLK+, and PSRL across various sizes of linear mixture MDPs, and the results demonstrate that the two variants, where VBMLE (TR) is VBMLE with a trust-region constrained algorithm for solving $\theta_t^V$ and VBMLE (BO) is VBMLE with GP-UCB for optimization, outperform the other baselines in terms of regret performance. In Figure 1(b), only the results of VBMLE with GP-UCB proposed in Appendix E and PSRL are presented, as the other approaches are intractable for the larger MDP ($|S| = 100$). The result shows that although PSRL has sub-linear regret in the small-scale MDP ($|S| = 5$), it exhibits linear regret in the larger MDP because it does not leverage the structure of the linear feature mapping. We also provide the standard deviation of the regret at the final step in Table G.1. VBMLE also has better robustness, with an order of magnitude smaller standard deviation than UCLK.

• Computation Time: UCLK requires solving the constrained quadratic optimization problem $U|S||A|$ times per step, where $U$ is the number of value-iteration rounds, whereas VBMLE with BO only requires $K + 1$ such solves, where $K$ is the time horizon for BO. Table 1 displays the computation time per step within the same environment as depicted in Figure 1(a). It is evident that the computational complexity of UCLK and UCLK+ renders these algorithms impractical in large MDPs.

Figure 1: Regret averaged over 10 trials. (a) $|S| = 5$, $|A| = 4$; (b) $|S| = 100$, $|A| = 4$.

Table 1: Computation time per step under different sizes of linear mixture MDPs.

| Method | $|S| = 3$, $|A| = 2$ | $|S| = 5$, $|A| = 4$ | $|S| = 15$, $|A| = 4$ | $|S| = 100$, $|A| = 4$ |
|---|---|---|---|---|
| VBMLE (TR) | 0.793s | 2.359s | 42.232s | - |
| VBMLE (BO) | 2.06s | 2.232s | 3.999s | 19.687s |
| UCLK | 3.135s | 49.763s | ≥ 35hr | - |
| UCLK+ | 0.741s | 20.128s | ≥ 37hr | - |

7 CONCLUSION

We proposed a provably effective and computationally efficient algorithm for solving linear mixture MDPs, called VBMLE. The regret of the proposed algorithm is proved to be upper bounded by $O\left(\frac{d\sqrt{T} \log T}{p_{\min}^4 (1 - \gamma)^2}\right)$. The proposed algorithm differs from the existing value-targeted regression approach and leverages the MLE with a value bias to learn the dynamics. We provide a novel theorem characterizing the confidence ellipsoid of the MLE, and the simulation results demonstrate the empirical performance of VBMLE.

REFERENCES

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. *Advances in neural information processing systems*, 24:2312–2320, 2011.

Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. *Advances in neural information processing systems*, 21, 2008.

Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, and Lin Yang. Model-based reinforcement learning with value-targeted regression. In *International Conference on Machine Learning*, pp. 463–474. PMLR, 2020.

Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning.
In *International Conference on Machine Learning*, pp. 263–272. PMLR, 2017. Vivek Borkar and P Varaiya. Adaptive control of Markov chains, I: Finite parameter set. *IEEE Transactions on Automatic Control*, 24(6):953–957, 1979. Richard H Byrd, Robert B Schnabel, and Gerald A Shultz. A trust region algorithm for nonlinearly constrained optimization. *SIAM Journal on Numerical Analysis*, 24(5):1152–1170, 1987. Qi Cai, Zhuoran Yang, Chi Jin, and Zhaoran Wang. Provably efficient exploration in policy optimization. In *International Conference on Machine Learning*, pp. 1283–1294. PMLR, 2020. Yuanzhou Chen, Jiafan He, and Quanquan Gu. On the sample complexity of learning infinite-horizon discounted linear kernel mdps. In *International Conference on Machine Learning*, pp. 3149–3183. PMLR, 2022. Alexei A Gaivoronski and Fabio Stella. Stochastic nonstationary optimization for finding universal portfolios. *Annals of Operations Research*, 100:165–188, 2000. Elad Hazan et al. Introduction to online convex optimization. *Foundations and Trends® in Optimization*, 2(3-4):157–325, 2016. Jiafan He, Dongruo Zhou, and Quanquan Gu. Near-optimal policy optimization algorithms for learning adversarial linear mixture MDPs. In *International Conference on Artificial Intelligence and Statistics*, pp. 4259–4280, 2022. Yu-Heng Hung and Ping-Chun Hsieh. Reward-biased maximum likelihood estimation for neural contextual bandits: A distributional learning perspective. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 7944–7952, 2023. Yu-Heng Hung, Ping-Chun Hsieh, Xi Liu, and P. R. Kumar. Reward-biased maximum likelihood estimation for linear stochastic bandits. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 7874–7882, 2021. Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. *Journal of Machine Learning Research*, 11:1563–1600, 2010. Chi Jin, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Provably efficient reinforcement learning with linear function approximation. In *Conference on Learning Theory*, pp. 2137–2143, 2020. P Kumar and Woei Lin. Optimal adaptive controllers for unknown markov chains. *IEEE Transactions on Automatic Control*, 27(4):765–774, 1982. PR Kumar and Pravin Varaiya. *Stochastic Systems: Estimation, Identification, and Adaptive Control*, volume 75. SIAM, 2015. H. J. Kushner. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. *Journal of Basic Engineering*, 86:97–106, 1964. Xi Liu, Ping-Chun Hsieh, Yu Heng Hung, Anirban Bhattacharya, and P. R. Kumar. Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits. In *International Conference on Machine Learning*, pp. 6248–6258. PMLR, 2020.
2JF8mJRJ7M
Equation 8 appears to capture the correlation between the image and each text by computing their inner products. Is Equation 8 meant to match all of the language information one by one, or only the current m-th text?
LIPSUM-FT: ROBUST FINE-TUNING OF ZERO-SHOT MODELS USING RANDOM TEXT GUIDANCE Giung Nam¹ Byeongho Heo² Juho Lee¹,³ ¹KAIST AI ²NAVER AI Lab ³AITRICS {giung, juholee}@kaist.ac.kr, bh.heo@navercorp.com ABSTRACT Large-scale contrastive vision-language pre-trained models provide the zero-shot model achieving competitive performance across a range of image classification tasks without requiring training on downstream data. Recent works have confirmed that while additional fine-tuning of the zero-shot model on the reference data results in enhanced downstream performance, it compromises the model’s robustness against distribution shifts. Our investigation begins by examining the conditions required to achieve the goals of robust fine-tuning, employing descriptions based on feature distortion theory and joint energy-based models. Subsequently, we propose a novel robust fine-tuning algorithm, Lipsum-FT, that effectively utilizes the language modeling aspect of the vision-language pre-trained models. Extensive experiments conducted on distribution shift scenarios in DomainNet and ImageNet confirm the superiority of our proposed Lipsum-FT approach over existing robust fine-tuning methods. 1 INTRODUCTION Recent advances in visual representation learning have been achieved through large-scale contrastive vision-language pre-training (Radford et al., 2021; Jia et al., 2021; Pham et al., 2023). A prominent instance that exemplifies this trend is the Contrastive Language-Image Pre-training (CLIP; Radford et al., 2021), a methodology that leverages natural language supervision to enhance visual representation learning. Capitalizing on its language modeling component, the zero-shot CLIP model, in conjunction with a classification head tailored using class label text for downstream tasks, adeptly conducts image classification tasks without requiring extra optimization specifically on the downstream images. Although the vision-language models achieve remarkable performance without extra training in downstream tasks, fine-tuning on the reference data still proves to be an effective way to improve downstream performance significantly. Nonetheless, this fine-tuning procedure compromises robustness: the accuracy of the fine-tuned model decreases across distribution shifts compared to the accuracy of the initial zero-shot model (Radford et al., 2021; Pham et al., 2023). Thus, it is worth exploring a novel technique for robust fine-tuning that can alleviate the performance trade-off between reference and distribution shift data. While recent works have been conducted in developing robust fine-tuning methods applicable to vision-language models (Wortsman et al., 2022b; Tian et al., 2023; Mao et al., 2023), these approaches tend to neglect the incorporation of the language aspect during the fine-tuning process. Since the nature of vision-language models is their multimodal structure encompassing both images and text, we argue that the existing robust fine-tuning methods do not effectively harness the information embedded within the language modeling aspect. Consequently, we propose a novel strategy for robust fine-tuning that effectively utilizes the language model, aiming to enhance robustness. The main contributions of our work can be outlined as follows: • We re-verified the performance trade-off on the reference and distribution shift data after fine-tuning of CLIP-ViT models (Radford et al., 2021). 
Our empirical investigation revealed that the recently introduced feature distortion theory (Kumar et al., 2022) lacks the capacity to fully explain the robustness of fine-tuned CLIP-ViT models under distribution shifts.

• In lieu of the feature distortion theory, we introduced the concept of joint energy-based models (Grathwohl et al., 2019) to investigate the underlying factors contributing to the robustness of the existing robust fine-tuning methods. To summarize, our research showed that the fine-tuning process disturbs the connection between the vision and language models, as evidenced by alterations in the energy values.

• We propose a novel robust fine-tuning method, Lipsum-FT, tailored for vision-language models. Lipsum-FT utilizes language model outputs to align the fine-tuned model with the zero-shot model. Notably, Lipsum-FT is the pioneering effort that considers the language modeling component of vision-language models for robust fine-tuning.

• Figure 1 depicts an overall idea of our proposed Lipsum-FT approach. Through empirical validation using the DomainNet and ImageNet datasets to simulate distribution shift scenarios, we demonstrate that Lipsum-FT outperforms previous methods in terms of both prediction accuracy and uncertainty estimation (§ 5.2).

Figure 1: Overview. We first verify that standard fine-tuning declines the vision-language connections in the pre-trained CLIP model, as evidenced by changes in the energy function $E_{\theta_0,\phi}$ after $T$ fine-tuning steps, denoted as $\theta_0 \rightarrow \theta_T$ (§ 4.2). Subsequently, we propose a simple yet effective novel robust fine-tuning method, Lipsum-FT, which regularizes the EnergyGap($\theta_T, \theta_0$) easily derived from language model outputs for random texts during fine-tuning (§ 5.1).

2 PROBLEM SETTING

Throughout the paper, we consider image classification tasks using pre-trained vision-language models (Radford et al., 2021; Jia et al., 2021; Pham et al., 2023). To elaborate on our problem setting, we will primarily discuss CLIP (Radford et al., 2021), yet it is worth mentioning that other vision-language models can be described in a similar manner.

**Zero-shot image classification.** CLIP consists of two separate models: (1) a vision model $F_\theta$ parameterized by $\theta$, which maps images into the $D$-dimensional representation space, and (2) a language model $G_\phi$ parameterized by $\phi$, which maps texts into the same representation space. For a $K$-class classification problem taking images $x$, CLIP demonstrates the ability to predict the target label $y$ in a zero-shot manner. Specifically, we can compute the class logits as $u(x) = W F_\theta(x)$, where the classification head weights $W$ are configured by the language model $G_\phi$ and the class label text $t_1, \ldots, t_K$ (e.g., ‘a photo of a class name’), $$W = [G_\phi(t_1) \cdots G_\phi(t_K)]^\top \in \mathbb{R}^{K \times D}. \quad (1)$$
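As a concrete illustration of Eq. (1), the following minimal sketch builds the zero-shot head $W$ from class prompts and computes the logits $u(x) = W F_\theta(x)$; the two encoders are random stand-ins for $G_\phi$ and $F_\theta$, and the $\ell_2$ normalization reflects common CLIP practice rather than anything stated in Eq. (1).

```python
import numpy as np

D, K = 512, 10                      # embedding dimension, number of classes
rng = np.random.default_rng(0)

def text_encoder(prompt: str) -> np.ndarray:
    """Random stand-in for the language model G_phi; a real implementation would
    tokenize the prompt and run CLIP's text transformer."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).normal(size=D)

def image_encoder(image: np.ndarray) -> np.ndarray:
    """Random stand-in for the vision model F_theta."""
    return rng.normal(size=D)

class_names = [f"class {k}" for k in range(K)]
# Eq. (1): stack the text embeddings of the class prompts into the head W (K x D).
W = np.stack([text_encoder(f"a photo of a {name}") for name in class_names])
W /= np.linalg.norm(W, axis=1, keepdims=True)      # L2 normalization, as in common CLIP practice

x = np.zeros((224, 224, 3))                        # dummy image
f = image_encoder(x)
f /= np.linalg.norm(f)
u = W @ f                                          # zero-shot logits u(x) = W F_theta(x)
print("predicted class:", class_names[int(np.argmax(u))])
```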
A conventional fine-tuning procedure involves simultaneously updating the vision model parameters \( \theta \) and the downstream classification head \( W \) to minimize the cross-entropy loss function, \[ L_{CE}(\theta, W) = - \sum_{(x,k) \in D} \log (\text{Softmax}^{(k)}(WF_\theta(x))) \] where \( x \) and \( k \) respectively are an image and its corresponding label index in the training dataset \( D \). We refer readers to Appendix B.1 for a more detailed description. ### 3 RELATED WORK This section mainly focuses on the works that follow the advent of the large-scale vision-language model (Radford et al., 2021). Appendix A provides further discussion of previous work in transfer learning, out-of-distribution generalization, and other relevant areas. **Robust fine-tuning of vision-language models.** In line with the recent progress in large-scale vision-language models, several works have recently delved into the realm of robust fine-tuning of these models: Wortsman et al. (2022b) demonstrated that a simple weight averaging strategy, which ensembles the weights of the zero-shot and fine-tuned models, effectively enhances robustness; Tian et al. (2023) proposed a method where the fine-tuned weights are projected into the neighboring region of the zero-shot model using different projection radii for each layer. In a closely related work to ours, Mao et al. (2023) utilized context information derived from the language model. **Theory of transfer learning.** While much of the current research on transfer learning focuses primarily on linear probing due to the complexity involved in analyzing the fine-tuning process (Tripuraneni et al., 2020; Du et al., 2021), a recent investigation by Kumar et al. (2022) introduced the concept termed the feature distortion theory. This theory proposes that fine-tuning could potentially distort the features learned during pre-training, potentially resulting in decreased performance on data that falls outside the distribution of the training data. However, as real-world scenarios may not always align with theoretical assumptions, the feature distortion theory alone does not provide a comprehensive explanation for robustness in practical applications (Trivedi et al., 2023). ### 4 UNDERSTANDING FINE-TUNING OF ZERO-SHOT MODELS This section examines the implications of fine-tuning for zero-shot vision-language models on vision tasks, particularly concerning distribution shifts. To this end, we present an extensive array of experimental outcomes across the following two scenarios using CLIP-ViT: (1) **DomainNet**, where DomainNet-R is employed as the reference training data, and evaluation encompasses four distribution shifts DomainNet-\{P, C, I, S\}; (2) **ImageNet**, where ImageNet serves as the reference training data, and evaluation entails four distribution shifts ImageNet-\{V2, R, A, S\}. Please refer to Appendix C for more details on these datasets. #### 4.1 STANDARD FINE-TUNING It is common practice to initialize a downstream classification head with zero-shot weights derived from the language model when fine-tuning vision-language models (Wortsman et al., 2022b; Tian et al., 2023; Mao et al., 2023). We denote this standard fine-tuning approach as \( FT \) throughout the paper and refer readers to Appendix B.1 for supplementary fine-tuning results using alternative head initialization strategies. Figure 2: Trade-off plots on DomainNet. 
In plots, the vertical axis shows accuracy on the reference data, whereas the horizontal axis indicates average accuracy on distribution shifts, i.e., the top-right represents a better case. The star markers ⋆ correspond to the zero-shot model, while the square markers □ denote the model fine-tuned with 5000 training steps and a learning rate of $1 \times 10^{-5}$. **Left:** The number of training steps is kept at 5000, while the learning rate varies as $1 \times 10^{-6}$, $3 \times 10^{-6}$, $1 \times 10^{-5}$, and $3 \times 10^{-5}$ (the leftmost point denotes $3 \times 10^{-5}$). **Right:** The learning rate is constant at $1 \times 10^{-5}$, while the number of training steps is altered to 1000, 3000, 5000, and 10000 (the leftmost point denotes 10000). Refer to Figure 7 for the results on ImageNet. Decrease in performance on distribution shifts. Radford et al. (2021) observed that fine-tuning CLIP-ViT models leads to improved accuracy on the source distribution, but it also results in a reduction in accuracy on distribution shifts. Figure 2 verifies this trade-off, where more fine-tuning of zero-shot models (represented by star markers ⋆) through higher learning rates or more training steps leads to enhanced performance on the reference data (moving upwards), but it also corresponds to a decrease in performance on distribution shifts (moving to the left). Examining the feature distortion theory. We first investigate this phenomenon using the feature distortion theory (Kumar et al., 2022). According to the feature distortion theory, fine-tuning mainly modifies features for the reference data rather than for distribution shift data. Kumar et al. (2022) assert that this is the underlying reason for the performance trade-off observed between reference and distribution shift data and empirically validate it in the context of fine-tuning ResNet-50 that is pre-trained using MoCo v2 (Chen et al., 2020c). They further demonstrate that LP−FT, performing fine-tuning after linear-probing, effectively alleviates feature distortion, resulting in enhanced performance on distribution shifts. However, our findings demonstrate that fine-tuning of CLIP-ViT models does not exhibit greater feature distortion in reference data compared to distribution shift data. In Figure 3, we computed the Euclidean distance between features before and after the fine-tuning procedure to quantitatively measure the extent of distortion (y-axis), in line with the experimental design presented by Kumar et al. (2022). Surprisingly, even in the DomainNet scenario, the features on distribution shift data exhibit larger distortion than those on reference data, contradicting the feature distortion theory. This can be attributed to the notion that the assumptions made in the theory, including 1) the over-parameterized linear network assumption and 2) the rigorous out-of-distribution assumption about the distribution shift data being orthogonal to the reference data’s row space, might not align with real-world practical situations. For a more comprehensive understanding of the results and further verification of LP−FT, we refer readers to Appendix B.1. 4.2 Regularization towards Zero-shot for Robust Fine-tuning Differentiating pre-trained vision-language models from vision-only models, e.g., Caron et al. (2021); Chen et al. (2021); Bao et al. (2022); He et al. (2022), is their ability in zero-shot classification utilizing the language model. 
Notably, the zero-shot model without any training already demonstrates superior performance in handling shifts in data distribution, as illustrated earlier in Figure 2. Consequently, the primary task in achieving robust fine-tuning of vision-language models is maintaining their zero-shot performance on distribution shift data (Wortsman et al., 2022b). In this section, we delve into a comparative analysis between the existing techniques for achieving robust fine-tuning and the conventional standard fine-tuning approach, denoted as FT. To this end, Figure 3: Bar plots depicting the feature distortion on DomainNet. A taller bar represents a larger degree of feature distortion after fine-tuning. The number on the bar denotes the relative difference in distortion values between reference and distribution shift data. **Left:** The number of training steps is kept at 5000, while the learning rate varies as $1e^{-06}$, $3e^{-06}$, $1e^{-05}$, and $3e^{-05}$. **Right:** The learning rate is constant at $1e^{-05}$, while the number of training steps is altered to 1000, 3000, 5000, and 10000. These plots are with B/16 on DomainNet, and refer to Figures 9 and 10 for the results with B/32, B/16, and L/14 on DomainNet and ImageNet. We explore the following three groups of existing algorithms that can serve as methods for achieving robust fine-tuning of zero-shot models: (1) EMA (Exponential Moving Average) and L2SP (Xuhong et al., 2018) apply regularization to steer the fine-tuned solution towards the zero-shot model in the weight space throughout the fine-tuning process; (2) KD (Hinton et al., 2015) and CAR-FT (Mao et al., 2023) utilize regularization to guide the fine-tuned solution towards the zero-shot model in the output space throughout the fine-tuning procedure; (3) WiSE (Wortsman et al., 2022b) and TPGM (Tian et al., 2023) are post-hoc approaches that obtain robust solutions by adjusting the fine-tuned model to be positioned close to the zero-shot model in the weight space. All experiments are with B/16 architecture, and we refer readers to Appendix B.2 for more details on these methods. **Interpretation via joint energy-based model.** In the pursuit of developing novel robust fine-tuning methods, it becomes essential to consider the following questions: What exactly differentiate the existing robust fine-tuning approaches from standard fine-tuning, and what particular outcomes are realized by employing regularization towards the zero-shot model? To explore this further, we perceive the zero-shot and fine-tuned models through the lens of a joint energy-based model (Grathwohl et al., 2019), which reveals the inherent capacity for generative modeling within conventional discriminative models. To be specific, pre-trained vision-language models offer discriminative modeling for image data $x$ and text data $t$, $$p_{\theta,\phi}(t|x) = \frac{\exp(f_{\theta,\phi}(x,t))}{\sum_{t'} \exp(f_{\theta,\phi}(x,t'))}, \text{ where } f_{\theta,\phi}(x,t) = \langle F_{\theta}(x), G_{\phi}(t) \rangle,$$ and an energy-based model for the joint distribution of $x$ and $t$ can be defined as $$p_{\theta,\phi}(x,t) = \frac{\exp(-E_{\theta,\phi}(x,t))}{Z(\theta,\phi)} = \frac{\exp(f_{\theta,\phi}(x,t))}{Z(\theta,\phi)},$$ where the energy function is given as $E_{\theta,\phi}(x,t) = -f_{\theta,\phi}(x,t)$ and $Z(\theta,\phi)$ is an unknown normalizing constant. That is, the inner product value of $f_{\theta,\phi}(x,t)$ utilized as class logits in discriminative modeling (as described in Eq. 
4) can be reused to define the energy function $E_{\theta,\phi}(x,t)$. Originally, the vision model $F_{\theta_0}$ before fine-tuning is strongly associated with the language model $G_{\phi}$ through vision-language contrastive pre-training to the extent that it enables zero-shot discriminative modeling. However, after fine-tuning, the representations of the vision model undergo distortion, resulting in a weakening of the connection between the vision and language models. More precisely, we confirm this through an increase in the energy gap, defined as follows: $$\text{EnergyGap}(\theta,\theta_0) = \mathbb{E}_x \mathbb{E}_t \left( E_{\theta,\phi}(x,t) - E_{\theta_0,\phi}(x,t) \right)^2.$$ Here, we compute the expectation involving $t$ by employing a set of 10,000 text tokens, each having a sequence length of 8. These tokens are generated randomly from the vocabulary of the pre-trained language model. We hypothesize that the existing robust fine-tuning techniques alleviate the decline in pre-trained associations between vision and language models by indirectly reducing the energy gap during their regularization procedure towards the zero-shot model. Figure 4: Energy gaps and distribution shift accuracy on DomainNet. It represents the energy gap (y-axis) and the relative accuracy of distribution shift data to the reference accuracy (x-axis). The model attained through the fine-tuning procedure is represented by square markers, while the model obtained through post-hoc approaches combining the fine-tuned and zero-shot models is denoted by diamond-shaped markers (i.e., WiSE and TPGM). The dashed line depicts a linear trend, and it comes with specific details provided as the Pearson correlation coefficient, denoted as PCC. We refer readers to Figure 13 for the results on ImageNet. Figure 4 provides clear evidence for our hypothesis regarding the energy gap minimization aspect in a range of robust fine-tuning approaches. We draw scatter plots illustrating the relative accuracy for distribution shifts (i.e., the distribution shift accuracy divided by the reference accuracy) in connection with the energy gap. The Pearson Correlation Coefficient (PCC) values in each subplot further clarify a distinct correlation between the energy gap and the robustness to distribution shifts. Notably, our method, labeled Lipsum-FT and elaborated upon in § 5, effectively leverages this connection and achieves superior performance in addressing distribution shift data. 5 Lipsum-FT: Fine-tuning with Random Text Guidance In this section, we propose a novel method for robust fine-tuning called Lipsum-FT, tailored to enhance the robustness of vision-language models after fine-tuning. Briefly speaking, the core concept of our approach is to minimize the change in energy during the fine-tuning process in the context of discriminative modeling with the language model. 5.1 Lipsum-FT Alongside the cross-entropy loss, which is a main objective of the fine-tuning procedure, our proposed Lipsum-FT method also minimizes the following regularization term, $$\hat{R}(\theta) = \frac{1}{2M} \| v_{\theta,\phi}(x) - v_{\theta_0,\phi}(x) \|_2^2,$$ where $v_{\theta,\phi}(x)$ is computed using the language model $G_\phi$ and $M$ text tokens $t_1, \ldots, t_M$, $$v^{(m)}_{\theta,\phi}(x) = \langle G_\phi(t_m), F_\theta(x) \rangle, \text{ for } m = 1, \ldots, M.$$ The name Lipsum-FT comes from the process of generating text tokens $t_1, \ldots, t_M$ in a random manner using the vocabulary of the pre-trained vision-language model. 
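A minimal sketch of the regularization term above is given below, assuming random stand-ins for the encoders; the number of random token sequences $M$ and the vocabulary size are illustrative choices (the sequence length of 8 matches the energy-gap estimate described in § 4.2).

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, SEQ_LEN, VOCAB = 512, 80, 8, 49408   # M and VOCAB are illustrative choices

def language_embed(tokens: np.ndarray) -> np.ndarray:
    """Random stand-in for G_phi applied to a batch of token sequences."""
    return np.random.default_rng(int(tokens.sum())).normal(size=(len(tokens), D))

def lipsum_reg(f_theta: np.ndarray, f_theta0: np.ndarray) -> float:
    """Eq. (7): R(theta) = 1/(2M) || v_{theta,phi}(x) - v_{theta_0,phi}(x) ||_2^2, with
    v^(m) = <G_phi(t_m), F_theta(x)> (Eq. 8) for randomly drawn token sequences t_m."""
    tokens = rng.integers(0, VOCAB, size=(M, SEQ_LEN))   # random "lipsum" text guidance
    G = language_embed(tokens)                           # (M, D) text embeddings
    v_cur = G @ f_theta                                  # energies under the fine-tuned vision model
    v_zs = G @ f_theta0                                  # energies under the frozen zero-shot model
    return float(0.5 / M * np.sum((v_cur - v_zs) ** 2))

f_theta = rng.normal(size=D)     # F_theta(x): feature from the model being fine-tuned
f_theta0 = rng.normal(size=D)    # F_theta0(x): feature from the frozen zero-shot model
print("Lipsum-FT regularizer:", lipsum_reg(f_theta, f_theta0))
```

In training, this term is minimized alongside the cross-entropy loss; in this sketch only the current vision features would receive gradients, while the zero-shot features and the language embeddings stay fixed.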
It is worth mentioning that the minimization of the regularization term described in Equation 7 is a stochastic way to minimize the energy gap defined before in Equation 6, which involves $M$ samples for every fine-tuning iteration. Here, we have defined our regularization term in the logit matching (Ba & Caruana, 2014) format to facilitate a comparison with the closely related CAR-FT. More detailed discussions regarding our proposed Lipsum-FT regularization will be provided in the forthcoming § 5.3. 5.2 Evaluation Results To begin, we present the evaluation results clearly showing the efficacy of our approach compared to the baseline methods. For a more comprehensive view of the results, please see Appendix B.3. Table 1: Main results on distribution shift tasks. It summarizes the accuracy of various methods for $B/16$ in distribution shift scenarios on DomainNet and ImageNet. An underline highlights the top two values in each column, where all values are averaged from five measurements. See Table 5 for the results with $B/32$ and $L/14$. | Method | DomainNet | ImageNet | |--------------|-----------|----------| | | R | P | C | I | S | IN | V2 | R | A | S | | Zero-shot | 82.5 | 66.7 | 65.6 | 43.8 | 56.9 | 68.2 | 62.0 | 76.3 | 52.1 | 46.7 | | FT | 88.5±0.1 | 62.5±0.2 | 64.3±0.4 | 41.2±0.5 | 50.6±0.5 | 82.8±0.1 | 72.6±0.3 | 68.5±0.3 | 39.2±0.3 | 48.0±0.2 | | EMA | 88.9±0.0 | 65.5±0.4 | 67.3±0.6 | 44.4±0.6 | 54.8±0.8 | 83.0±0.0 | 72.6±0.2 | 70.2±0.3 | 39.6±0.3 | 49.4±0.3 | | L2SP | 88.6±0.1 | 63.2±0.2 | 65.0±0.2 | 32.0±0.2 | 51.4±0.7 | 82.9±0.1 | 72.6±0.2 | 68.8±0.2 | 39.7±0.2 | 48.2±0.1 | | KD | 88.6±0.1 | 64.2±0.2 | 65.7±0.3 | 42.9±0.2 | 52.4±0.4 | 83.1±0.1 | 73.1±0.3 | 72.9±0.1 | 42.3±0.4 | 49.9±0.2 | | CAR–FT | 88.9±0.0 | 64.4±0.1 | 65.8±0.2 | 43.3±0.2 | 53.2±0.5 | 83.2±0.0 | 73.0±0.2 | 71.3±0.3 | 43.7±0.2 | 49.5±0.2 | | Lipsum–FT | 89.0±0.0 | 66.3±0.2 | 68.0±0.0 | 46.0±0.2 | 56.2±0.2 | 83.3±0.0 | 73.6±0.1 | 75.9±0.1 | 49.9±0.3 | 51.4±0.1 | Table 2: Uncertainty quantification results on distribution shift tasks. It summarizes the uncertainty metrics, including (a) expected calibration error and (b) negative log-likelihood, of various methods for $B/16$ in distribution shift scenarios on DomainNet and ImageNet. All fine-tuning results are averaged from five measurements, and the corresponding standard deviation values are provided. An underline highlights the top two values in each column. (a) Expected calibration error (lower is better). 
| Method | DomainNet | ImageNet | |--------------|-----------|----------| | | R | P | C | I | S | IN | V2 | R | A | S | | Zero-shot | 0.025 | 0.020 | 0.028 | 0.107 | 0.021 | 0.019 | 0.023 | 0.039 | 0.076 | 0.045 | | FT | 0.024±0.001 | 0.123±0.001 | 0.122±0.001 | 0.204±0.004 | 0.184±0.004 | 0.034±0.001 | 0.075±0.001 | 0.066±0.002 | 0.234±0.002 | 0.159±0.002 | | EMA | 0.018±0.002 | 0.104±0.005 | 0.101±0.009 | 0.196±0.006 | 0.151±0.011 | 0.027±0.002 | 0.068±0.003 | 0.050±0.003 | 0.220±0.004 | 0.142±0.004 | | L2SP | 0.024±0.001 | 0.123±0.002 | 0.124±0.002 | 0.204±0.003 | 0.183±0.004 | 0.033±0.001 | 0.074±0.002 | 0.063±0.001 | 0.228±0.002 | 0.158±0.002 | | KD | 0.006±0.001 | 0.078±0.003 | 0.073±0.003 | 0.170±0.005 | 0.123±0.002 | 0.007±0.001 | 0.029±0.002 | 0.009±0.001 | 0.169±0.004 | 0.083±0.002 | | CAR–FT | 0.018±0.001 | 0.110±0.001 | 0.107±0.001 | 0.202±0.004 | 0.156±0.003 | 0.026±0.001 | 0.066±0.002 | 0.045±0.001 | 0.197±0.003 | 0.141±0.001 | | Lipsum–FT | 0.007±0.000 | 0.055±0.002 | 0.043±0.001 | 0.146±0.001 | 0.081±0.001 | 0.008±0.001 | 0.034±0.001 | 0.012±0.002 | 0.122±0.004 | 0.088±0.001 | (b) Negative log-likelihood (lower is better). | Method | DomainNet | ImageNet | |--------------|-----------|----------| | | R | P | C | I | S | IN | V2 | R | A | S | | Zero-shot | 0.731 | 1.497 | 1.527 | 3.071 | 2.047 | 1.184 | 1.490 | 0.927 | 1.925 | 2.245 | | FT | 0.445±0.003 | 1.842±0.003 | 1.748±0.002 | 3.641±0.003 | 2.818±0.004 | 0.632±0.002 | 1.129±0.002 | 1.439±0.003 | 2.765±0.007 | 2.559±0.009 | | EMA | 0.425±0.002 | 1.673±0.004 | 1.540±0.009 | 3.435±0.006 | 2.466±0.005 | 0.608±0.002 | 0.898±0.009 | 1.330±0.002 | 2.660±0.007 | 2.399±0.008 | | L2SP | 0.447±0.002 | 1.846±0.007 | 1.736±0.013 | 3.635±0.010 | 2.806±0.043 | 0.621±0.002 | 1.126±0.002 | 1.429±0.003 | 2.727±0.012 | 2.546±0.019 | | KD | 0.435±0.002 | 1.683±0.010 | 1.575±0.007 | 3.427±0.022 | 2.511±0.030 | 0.601±0.002 | 1.047±0.003 | 1.137±0.007 | 2.407±0.009 | 2.257±0.022 | | CAR–FT | 0.423±0.001 | 1.732±0.013 | 1.624±0.013 | 3.505±0.024 | 2.579±0.035 | 0.599±0.001 | 1.087±0.005 | 1.261±0.011 | 2.424±0.014 | 2.381±0.018 | | Lipsum–FT | 0.425±0.001 | 1.521±0.005 | 1.400±0.005 | 3.079±0.011 | 2.150±0.011 | 0.595±0.001 | 1.010±0.003 | 0.973±0.004 | 1.986±0.010 | 2.123±0.004 | Evaluation results for classification accuracy. Table 1 presents comparative results in the DomainNet and ImageNet distribution shift scenarios for $B/16$. It clearly shows that Lipsum–FT surpasses all other fine-tuning baselines across all types of shifts. Table 5 in Appendix B.3 provides additional confirmation that this excellence extends beyond the $B/16$ architecture and it also observed in the case of the $B/32$ and $L/14$ architectures. Evaluation results for predictive uncertainty. While classification accuracy serves as the primary metric for assessing image classification models, it does not offer insights into the model’s ability in uncertainty quantification when processing distribution shift data. Considering the tendency of classification models to make incorrect predictions when faced with distribution shift data (as demonstrated by lower distribution shift accuracy compared to reference accuracy), evaluating the model’s capacity to quantify predictive uncertainty becomes a notable concern. 
To account for this, we also present the evaluation results using the following two prominent uncertainty metrics (Guo et al., 2017): (a) Expected calibration error (Pakdaman Naeini et al., 2015), which quantifies calibration by calculating the overall disparity between prediction accuracy and confidence. (b) Negative log-likelihood, which directly assesses the likelihood of the predictive categorical distributions. Table 2 shows that our suggested Lipsum-FT approach attains lower numbers in these metrics, indicating its superiority in uncertainty quantification.

Figure 5: Scatter plots for post-hoc methods on DomainNet. Moving in an upward and rightward direction signifies improved accuracy for the reference and distribution shift data, respectively, making the top right corner more desirable. We refer readers to Figure 14 for ImageNet results, as well as Tables 6 and 7 for numerical results.

Figure 6: Scatter plots for ensemble and soup methods. Moving in an upward and rightward direction signifies improved accuracy for the reference and distribution shift data, respectively, making the top right corner more desirable. Small markers depict a set of fine-tuned models with varying hyperparameter configurations. Within this collection, Soup ingredients are identified by a triangular marker, while Ensemble members are distinguished by an inverted triangular marker.

5.3 Discussions

Combining with post-hoc methods. Given that the WiSE and TPGM techniques belong to the class of post-hoc methods, which involves merging zero-shot and fine-tuned models, they have the potential to work in conjunction with our proposed Lipsum-FT. Accordingly, we further present the results for comparison and combination of Lipsum-FT with WiSE and TPGM-C in Figure 5; TPGM-C is a variant of TPGM introduced by Tian et al. (2023) for a fair comparison with WiSE. Please refer to Appendix B.2 for more details on these methods. Figure 5 clearly shows that Lipsum-FT is positioned in a favorable region at the upper right, demonstrating its superiority on both the reference data (upper) and distribution shift data (right). Furthermore, the plots labeled as Lipsum-FT + WiSE and Lipsum-FT + TPGM-C illustrate that the existing post-hoc methodologies WiSE and TPGM can operate independently to improve the distribution shift performance in conjunction with our newly introduced method, Lipsum-FT.

Combining with ensemble and soup methods. Another strategy acknowledged for its ability to withstand distribution shifts is the concept of model soup (Wortsman et al., 2022a), where the weights of multiple models fine-tuned with different hyperparameter configurations are averaged. In this context, we also present the results comparing and combining Lipsum-FT with Soup and Ensemble in Figure 6. Soup and Ensemble represent weight-space and output-space ensembles, respectively, using the following greedy procedure described in Wortsman et al. (2022a): (1)

Table 3: Ablation results on loss function and text guidance. It summarizes the accuracy of methods located within the range from CAR–FT to Lipsum–FT. The values enclosed in brackets indicate the difference in performance when compared to CAR–FT, where all values are averaged from five measurements. An underline highlights the top two values in each column. Please refer to Table 9 for ImageNet results, including standard deviations.
| Method | Loss function | Text guidance | DomainNet | |--------------|---------------|---------------|-----------| | | KLD → MSE | fixed → random | R | P | C | I | S | | CAR–FT | | | 88.9 | 64.4 | 65.8 | 43.3 | 53.2 | | CAR–FT<sub>MSE</sub> | ✓ | | 88.9 | 65.7 (+1.3) | 67.1 (+1.3) | 44.5 (+1.2) | 55.2 (+2.0) | | Lipsum–FT<sub>KLD</sub> | ✓ | ✓ | 89.0 (+0.1) | 66.0 (+1.6) | 67.7 (+1.9) | 45.8 (+2.5) | 55.7 (+2.5) | | Lipsum–FT | ✓ | ✓ | 89.0 (+0.1) | 66.3 (+1.9) | 68.0 (+2.2) | 46.0 (+2.7) | 56.2 (+3.0) | arranging the fine-tuned solutions in order of their validation accuracy, and (2) gradually expanding the pool of candidates for ensembling in a greedy manner to improve the validation accuracy. Figure 6 illustrates that when our approach is combined with the model soup method, denoted as Lipsum–FT + Soup, it results in improvements in both reference and distribution shift performance, positioning it favorably in the upper right region. It is worth emphasizing that the soup models outperform the ensemble models, which come with an extra inference cost (approximately four times more, as we constrained the maximum ensemble size to four). These results align with the observations reported by Wortsman et al. (2022a) and demonstrate that our Lipsum–FT approach functions independently from the soup method. Further comparison with Mao et al. (2023). In a related study conducted by Mao et al. (2023), which introduced the CAR–FT method, they empirically investigated the impact of fine-tuning on the context awareness of pre-trained CLIP features. Their findings align with our central idea that fine-tuning disrupts the pre-trained connection between vision and language models. However, our research extends beyond context awareness, encompassing a broader spectrum of general relationships for all potential discriminative models derived from pre-trained vision-language models. We enhance our understanding of the critical factors behind Lipsum–FT’s success by conducting additional ablation study involving a comparative analysis between CAR–FT and Lipsum–FT. More precisely, the points mentioned above lead to the following methodological distinctions between CAR–FT and our proposed Lipsum–FT: (1) CAR–FT utilizes the Kullback-Leibler Divergence (KLD) loss for handling categorical outputs in the context of a discriminative model for context classification. In contrast, Lipsum–FT adopts the Mean Squared Error (MSE) loss for energies (i.e., logits) in the context of an arbitrary discriminative model. (2) CAR–FT employs a fixed text guidance to explicitly address context awareness, whereas Lipsum–FT relies on a random text guidance strategy covering all potential aspects inherent in pre-trained vision-language models. Table 3 conducts an ablation analysis on these two factors, illustrating the roles played by both the MSE loss and the random text guidance strategy in enhancing performance. For a more detailed implementation and a connection between MSE and KLD losses, refer to Appendix B.3. 6 CONCLUSION In this study, we introduced Lipsum–FT, a straightforward yet powerful method for enhancing the distribution shift robustness of fine-tuned vision-language models. Lipsum–FT goes beyond the conventional notion that fine-tuning distorts the features learned during pre-training; it advances the concept in the context of pre-trained vision-language models that fine-tuning can disrupt the pre-trained connections between vision and language models. 
From this, Lipsum–FT minimizes the energy gap across all potential discriminative models derived from the pre-trained vision-language model. The extensive experimental results strongly support these claims, and Lipsum–FT has also demonstrated its capability to enhance other methods using weight averaging strategies. This research represents pioneering efforts in involving the language model aspect into the fine-tuning procedure for vision-language models. An intriguing avenue for future research is investigating a novel text guidance strategy tailored for robust fine-tuning instead of random text guidance. Ethics statement. In this work, we utilized CLIP, a notable contemporary example of large-scale pre-trained models. Such large models potentially raise ethical concerns due to unfiltered data during their pre-training phase (Weidinger et al., 2021). Nevertheless, it is worth clarifying that this work is fundamentally an analytical paper and does not inherently carry significant ethical risks. Reproducibility statement. Appendix C provides specific information about the experiments, and we plan to release the experiment code accessible to the public to facilitate reproducibility. ACKNOWLEDGEMENT This work was partly supported by Institute of Information & communications Technology Promotion(IITP) grant funded by the Korea government(MSIT)(No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)), the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (NRF-2022R1A5A708390812, NRF-2021M3E5D9025030), and KAIST-NAVER Hypercreative AI Center. This material is based upon work supported by the Google Cloud Research Credits program with the award GCP19980904 and Cloud TPUs from Google’s TPU Research Cloud (TRC). REFERENCES Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernando Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems (NIPS), 2014. Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Antoine Dedieu, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, George Papamakarios, John Quan, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Wojciech Stokowiec, Luyu Wang, Guangyao Zhou, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind. Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEit: BERT pre-training of image transformers. In International Conference on Learning Representations (ICLR), 2022. Abhijit Bendale and Terrance Boult. Towards open world recognition. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. Abhijit Bendale and Terrance Boult. Towards open set deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. In Advances in Neural Information Processing Systems (NIPS), 2011. James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
2OwSqvxjP2
The author's proposal to generate ground-truth labels using a mixup-based method raises a valid concern about the dataset's stability during training. It's essential to verify whether the constructed labeled dataset remains invariant throughout the training process.
Boosting Semi-Supervised Learning via Variational Confidence Calibration and Unlabeled Sample Elimination Anonymous authors Paper under double-blind review Abstract Despite the recent progress of Semi-supervised Learning (SSL), we argue that the existing methods may not employ unlabeled examples effectively and efficiently. Many pseudo-label-based methods select unlabeled examples into the training stage based on the inaccurate confidence scores provided by the output layer of the classifier network. Additionally, most prior work typically adopts all the available unlabeled examples without data pruning, which is incapable of learning from massive unlabeled data. To address these issues, this paper proposes two methods called VCC (Variational Confidence Calibration) and INFUSE (INfluence-Function-based Unlabeled Sample Elimination). VCC is a general-purpose plugin of confidence calibration for SSL. By approximating the calibrated confidence through three types of consistency scores, a variational autoencoder is leveraged to reconstruct the confidence score for selecting more accurate pseudo-labels. Based on the influence function, INFUSE is a data pruning method for constructing a core dataset of unlabeled examples. The effectiveness of our methods is demonstrated through experiments on multiple datasets and in various settings. For example, on the CIFAR-100 dataset with 400 labeled examples, VCC reduces the classification error rate of FixMatch from 46.47% to 43.31% (with improvement of 3.16%). On the SVHN dataset with 250 labeled examples, INFUSE achieves 2.61% error rate using only 10% unlabeled data, which is better than RETRIEVE (2.90%) and the baseline with full unlabeled data (3.80%). Putting all the pieces together, the combined VCC-INFUSE plugins can reduce the error rate of FlexMatch from 26.49% to 25.41% on the CIFAR100 dataset (with improvement of 1.08%) while saving nearly half of the original training time (from 223.96 GPU hours to 115.47 GPU hours). 1 Introduction Deep neural networks have become the foundation of many fields in machine learning. The success of neural networks can be partially attributed to the existence of large-scale datasets with annotations such as ImageNet (Deng et al., 2009) and COCO (Lin et al., 2014). However, collecting and labeling a huge amount of data is time-consuming and laborious. Besides, the potential privacy issues are also obstacles to data labeling. In contrast, collecting unlabeled data is cheaper and easier for most tasks. To mitigate the need for labeled examples, semi-supervised learning (SSL) has been a hot topic in recent years for leveraging cheap and large-volume unlabeled examples. One commonly used approach is Pseudo-labeling, which produces artificial pseudo-labels for unlabeled data. For example, FixMatch (Sohn et al., 2020) is one of the most popular such methods. In FixMatch, the unlabeled data is first fed to the model to get the prediction, followed by a selection module with a fixed threshold to select pseudo labels. Unlabeled data points whose confidence scores are greater than the threshold would be chosen for training, while others are simply ignored. By denoting $\tau$ as the fixed threshold, $c_i$ (or $\tilde{c}_i$) as the confidence distribution predictions of weakly (or strongly) augmented version of example $i$, respectively, and $\hat{c}_i = \arg\max(c_i)$ as the predicted class... label of weakly augmented example, the loss on unlabeled data can be formulated as Eq. 
(1) $$L_{unlab} = \sum_i 1(\max(c_i) \geq \tau)L(\hat{c}_i, \bar{c}_i),$$ where $L(\hat{c}_i, \bar{c}_i)$ is the loss between a class label and a confidence distribution. Although FixMatch has become the foundation of many state-of-the-art SSL methods (Zhang et al., 2021; Zheng et al., 2022), we argue that it may fail to use unlabeled examples effectively and efficiently. (1) Incorrect pseudo labels caused by calibration error. A well-calibrated model is expected to be desirable if the predicted confidence score really reflects the probability of classifying the example correctly. However, according to Guo et al. (2017), most networks suffer from calibration error problems, such that the models become over-confident or under-confident. Hence, the confidence score cannot correctly indicate the chance that the example is correctly classified. The previous methods based on the confidence score can sometimes generate wrong pseudo labels, leading to the performance degeneration problem. Hence FixMatch-like methods appear to be unreliable. (2) Huge computation cost in training. The SSL model is required to forward propagate over the whole dataset to compute confidence scores for pseudo-label selection. Due to the great amount of unlabeled data, this step would be extremely time-consuming. However, not all unlabeled data is helpful to the model’s decision boundary. For example, some data points can be too easy to provide meaningful gradients, while some can be too difficult for the model to select and learn at this stage. We argue that the unlabeled training set should be dynamically pruned, so as to reduce computation cost and speed up convergence. To address the first issue, we propose Variational Confidence Calibration (VCC), a variational method to obtain the calibrated confidence scores for pseudo-label selection. The well-calibrated confidence score is expected to be closer to the ground-truth probability that an example is correctly predicted, providing a better reference in selecting pseudo-labeled examples. Although confidence calibration is a well-studied problem in fully-supervised setting, we argue it would be more challenging in SSL due to the absence of ground-truth labels. To bypass this difficulty, we employ three consistency scores to measure the stability of prediction. By simultaneously considering the stability and confidence of the prediction, we can approximate the calibrated confidence scores. Furthermore, the variational autoencoder is used to provide more stable results by reconstructing the calibrated confidences. To address the second issue, we propose the INFluence Function-based Unlabeled Sample Elimination (INFUSE) method. INFUSE uses the influence function Koh & Liang (2017) to compute the importance of each unlabeled example. By dynamically preserving the data points with the highest importance, the unlabeled core set can be built for replacing the whole dataset. On this small-scale core set, the model is expected to converge faster so that the computation cost at the training stage can be reduced. By combining two together, the finalized VCC-INFUSE method achieves higher prediction accuracy with lower training costs. In summary, this paper makes the following contributions: 1. We propose the VCC method. By generating a well-calibrated confidence score, VCC can bring more accurate pseudo labels and improve the model’s accuracy. As a pluggable module, VCC can be combined with existing SSL methods flexibly. 2. We propose the INFUSE method. 
INFUSE can dynamically prune unimportant unlabeled examples, in order to speed up the convergence and reduce the computation costs in training. 3. The effectiveness of our methods is demonstrated on multiple datasets and in various settings. 2 RELATED WORK Semi-Supervised Learning. FixMatch (Sohn et al., 2020) is one of the most popular SSL methods. In FixMatch, the weakly-augmented unlabeled example is first fed to the model to obtain the one-hot pseudo-label. Then the model is trained with the strongly-augmented example and required to produce predictions consistent with the pseudo-label. FlexMatch (Zhang et al., 2021) further proposes an adaptive threshold strategy corresponding to the different learning stages and categories. SimMatch (Zheng et al., 2022) simultaneously considers semantic similarity and instance similarity, and encourages the same class predictions and similar similarity relationships for the same instance. Apart from these methods, an explicit consistency regularization is also widely used (Laine & Aila, 2016; Berthelot et al., 2020; Miyato et al., 2019; Ganev & Aitchison, 2020; Chen et al., 2023; Li et al., 2021). **Confidence Calibration.** Guo et al. (2017) is the first to point out the calibration problem in modern classifiers. They propose Temperature Scaling (TS) to rescale confidence distribution for preventing over-confident. Ensemble TS (Zhang et al., 2020) further extends the representation ability of TS by extending the parameter space. Besides, Kumar et al. (2018) propose the MMCE method, a trainable calibration regularization based on RKHS. However, these methods are restricted to the fully-supervised setting where the ground-truth label is available. **Core Set Selection.** Most methods for selecting core set focus on the fully-supervised setting. Paul et al. (2021) propose the EL2N method, where the norm of the loss over an example is used to measure its importance. By keeping the most important examples, EL2N significantly reduces the training time at the cost of minor accuracy reduction. Killamsetty et al. (2021a) further propose GradMatch, which extended the core dataset to a weighted set by a submodular function. RETRIEVE (Killamsetty et al., 2021b) is most related to our work since it is designed for SSL. RETRIEVE formulates the core set selection as an optimizing problem. However, we argue that the optimizing function in RETRIEVE only considers the loss on the labeled training set, which may lead to a deviation from the desired results (i.e. minimizing the loss on the validation set). ### 3 Confidence Calibration with VCC Most existing calibration methods are not suitable for SSL due to the absence of ground-truth labels for unlabeled examples. Taking the original confidence score for pseudo-label selection will cause unstable results. Hence, we employ three different consistency scores ($s_{ens}$, $s_{dem}$ and $s_{view}$) to simultaneously measure the stability of prediction. By combing the three scores, we can obtain the approximated calibrated confidence $\tilde{r}$, which is closer to the probability of an example being correctly classified. However, $\tilde{r}$ is not directly used for pseudo-label selection since the process of estimating $\tilde{r}$ from three consistency scores is still unstable on some examples. Hence, we introduce VAE to reconstruct $\tilde{r}$ for selecting the pseudo-label. The graphical model and framework illustration of VCC is given by Fig. 1 and 2 respectively. 
The VAE is learned jointly with the original classifier in training, where $\tilde{r}$ is supposed to be the "ground-truth" to calculate the reconstruction loss. For selecting pseudo-label, we employ the output of VAE as the calibrated confidence. #### 3.1 Ensemble Consistency From the perspective of Bayesian, the parameters $\theta$ of a model are sampled from a probability distribution over the training set $D$. The model’s prediction for a sample $x$ can be formulated as: $$p(y|x, D) = \int p(y|x, \theta)p(\theta|D)d\theta,$$ where $p(y|x, \theta)$ represents the probability distribution of the label $y$ of $x$ given the parameters $\theta$, and $p(\theta|D)$ represents the probability distribution of the model parameters $\theta$ trained on the dataset $D$. A single model may provide incorrect predictions for example $x$ due to randomness and noise, even if the confidence is high. Considering the entire parameter space, if all model parameters yield consistent predictions for $x$, the result is more convincing. In this case, the prediction can be viewed as an ensemble of predictions from multiple models. However, due to the large parameter space of $\theta$, direct computation of Eq. 2 is intractable. In this study, we apply Monte-Carlo Dropout (Gal & Ghahramani, 2016) on the linear layer to approximate... the computation of Eq. (2). The feature map is cloned by $K$ copies, followed by a Dropout layer to randomly eliminate neural connections in the classification head to obtain predictions. By doing so, the model will generate $K$ estimated confidence distributions of example $i$, the expectation can be treated as the ensemble of $K$ different models: $$\hat{y}_i = p(y|x, \text{Dropout}(\theta)), \quad \bar{\hat{y}} = \frac{1}{K} \sum_{i=1}^{K} \hat{y}_i.$$ (3) Then, entropy is employed as the ensemble-consistency score to measure the different models’ consistency of example: $s_{ens} = -\sum_{c=1}^{M} \bar{\hat{y}}_c \log \bar{\hat{y}}_c$, where $M$ is the number of classes and $c$ is the index of the category. ### 3.2 Temporal Consistency In SSL, the model parameters are updated during training, causing the decision boundary to change frequently. Some examples may shift from one side of the decision boundary to the other after parameter updates, resulting in a change in classification results. In this case, even for examples with high confidence at the current step, their prediction results may be unstable. If these examples are used in training, it may result in incorrect pseudo labels and hinder the model’s performance. To measure the stability of prediction results between different stages, we propose the temporal consistency score, which considers the changes in confidence distribution of an example between different epochs. Specifically, let $y^t_i$ represent the confidence distribution of an example at epoch $t$. The temporal consistency score can be calculated as: $$s_{temp} = D_{KL} \left( y^t_i \middle\| \frac{1}{K} \sum_{k=1}^{K} y^{t-k}_i \right) = \sum_{c=1}^{M} y^t_c \log \left( \frac{y^t_c}{\frac{1}{K} \sum_{k=1}^{K} y^{t-k}_c} \right),$$ (4) where $D_{KL}$ represents the Kullback-Leibler Divergence, $M$ is the number of classes, and $K$ is a hyperparameter representing the window size. In experiments, we empirically set $K = 1$ to preserve the sensitivity of abnormal confidences. Although both consider the problem from the perspective of time, our temporal-consistency method is very dissimilar from the time-consistency method proposed by Zhou et al. (2020). 
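The sketch below shows how the two scores introduced so far could be computed: $s_{ens}$ via Monte-Carlo Dropout on the classification head (Eq. 3) and $s_{temp}$ via the KL divergence of Eq. (4). The linear head, the dropout rate, and the stand-in "past epoch" predictions are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
M_CLASSES, K_DROPOUT, FEAT_DIM = 10, 8, 64     # illustrative sizes

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def mc_dropout_predictions(feature, W, k, p_drop=0.5):
    """K stochastic passes of the linear classification head with dropout applied
    to copies of the feature map, approximating the ensemble in Eq. (3)."""
    preds = []
    for _ in range(k):
        mask = (rng.random(feature.shape) > p_drop) / (1.0 - p_drop)   # inverted dropout
        preds.append(softmax(W @ (feature * mask)))
    return np.stack(preds)                                             # (k, M_CLASSES)

def ensemble_consistency(preds):
    """s_ens: entropy of the averaged MC-dropout prediction."""
    y_bar = preds.mean(axis=0)
    return float(-(y_bar * np.log(y_bar + 1e-12)).sum())

def temporal_consistency(y_now, y_past):
    """s_temp (Eq. 4): KL( y^t || average of the previous K epochs' predictions )."""
    y_ref = y_past.mean(axis=0)
    return float((y_now * np.log((y_now + 1e-12) / (y_ref + 1e-12))).sum())

feature = rng.normal(size=FEAT_DIM)
W = rng.normal(size=(M_CLASSES, FEAT_DIM))
preds = mc_dropout_predictions(feature, W, K_DROPOUT)
print("s_ens  =", ensemble_consistency(preds))
print("s_temp =", temporal_consistency(preds[0], preds[1:]))   # toy stand-in for past-epoch outputs
```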
### 3.3 View Consistency

Multi-view learning (Xu et al., 2015) aims to predict data from multiple perspectives, so that different predictors can jointly correct the prediction. In SSL, one approach to obtaining models with different views is to divide the whole dataset into multiple subsets and train a model on each. However, this incurs high training costs, and the volume of labeled data in each subset would be too small to train a decent model. To address this issue, we use the Exponential Moving Average (EMA) to construct models with different views. The original model parameter $\theta$ is updated using gradient descent, while $\theta_{ema}$ is updated using the EMA scheme:

$$\theta^t_{ema} = \theta^t \cdot \beta + \theta^{t-1}_{ema} \cdot (1 - \beta),$$

where $\beta$ is a decay hyperparameter. Therefore, the two sets of parameters can be treated as two different views of the same network structure. Typically, a classification model is composed of a feature extraction network (backbone) and a classification head (linear layer). To further increase the difference between the two views, we adopt a cross-feature trick: the backbone of each view first extracts features from the input, which are then fed into the classification head of the other view. It can be formulated as:

$$y = p(y|x, \theta^{backbone}, \theta^{head}_{ema}), \quad y_{ema} = p(y|x, \theta^{backbone}_{ema}, \theta^{head}).$$

After obtaining the two outputs, the KL divergence is used to measure the consistency between them:

$$s_{view} = D_{KL}(y||y_{ema}).$$

Temporal consistency and view consistency may appear to overlap to some extent, since the predictions of the EMA model used in view consistency can also be considered an ensemble of predictions from past epochs. The difference is that the cross-feature trick is used when computing view consistency, which makes this metric focus on consistency across views rather than across time steps.

3.4 APPROXIMATION OF CALIBRATED CONFIDENCE

We have introduced three scores to evaluate the stability of a prediction. However, $s_{ens}$, $s_{tem}$ and $s_{view}$ cannot be directly used for pseudo-label selection, which is based on confidence scores. To address this, we propose a simple method for approximating the calibrated confidence from the three consistency scores. The consistency scores are first normalized and summed up to form a stability score. Then, we use interpolation to approximate the calibrated confidence $\tilde{r}_u$. Please refer to Appendix A for technical details.

3.5 RECONSTRUCT $\tilde{r}_u$ WITH VARIATIONAL AUTO ENCODER (VAE)

In Sec. 3.4, we combined three consistency scores to obtain $\tilde{r}_u$, an approximation of the calibrated confidence score. However, this estimate may be unstable due to the update of the queue $q$ and abnormal interpolation endpoints (detailed in Appendix A). To address this, we reconstruct the statistics-based $\tilde{r}_u$ in a learning-based way. Specifically, a VAE is employed to generate the calibrated confidence score $r_u$ used for pseudo-label selection, and $\tilde{r}_u$ is used for training the VAE. We assume $r$ is generated by the following random process with two steps: (1) a hidden variable $z$ is sampled from a prior distribution $p_\theta(z)$; (2) a value $r$ is generated from the conditional distribution $p_\theta(r|z,c,x)$:

$$p_\theta(r|c,x) = \int_z p_\theta(z)p_\theta(r|z,c,x)dz.$$

However, the marginal likelihood $p_\theta(r|c,x)$ is generally intractable.
Hence another distribution $q_\phi(z|c,x)$ is introduced to approximate the posterior over $z$ (please refer to Appendix B for details):

$$\log p_\theta(r|c,x) = \log \int_z p_\theta(z)\, p_\theta(r|c,z,x)\, dz \geq \mathbb{E}_{q_\phi(z|c,x)} \log p_\theta(r|c,z,x) - D_{KL}(q_\phi(z|c,x)||p_\theta(z|c,x)).$$

The first term is the likelihood of the calibration reconstruction (denoted as $L_{recon}^{VCC}$), where $q_\phi(z|c,x)$ is the encoder that infers the hidden variable $z$, and $p_\theta(r|c,z,x)$ is the decoder that recovers a calibrated confidence $r$. To compute the reconstruction loss, the approximated $\tilde{r}$ (Eq. 20 in Appendix A) is used as the ground truth. Besides, $z$ needs to be sampled from $q_\phi(z|c,x)$. We use the reparameterization trick (Kingma & Welling, 2014), where the encoder predicts the mean and standard deviation of $z$. By setting \( \epsilon \sim \mathcal{N}(0, 1) \), the reparameterization is formulated as \( z = \mu(c, x) + \epsilon \cdot \sigma(c, x) \). As for the second term, under the Gaussian assumptions of the prior \( p_\theta(z|c, x) \sim \mathcal{N}(0, 1) \) and the approximator \( q_\phi(z|c, x) \sim \mathcal{N}(\mu(c, x), \sigma^2(c, x)) \), we have:

\[ L_{KL}^{VCC} \doteq D_{KL}(q_\phi(z|c, x)||p_\theta(z|c, x)) = -\log \sigma + \frac{\mu^2 + \sigma^2}{2} - \frac{1}{2}. \] (10)

The overall objective function for model training can be formulated as:

\[ L = L_{lab} + \lambda_{unlab} \cdot L_{unlab} + \lambda_{VCC} \cdot (L_{recon}^{VCC} - L_{KL}^{VCC}). \] (11)

Although we obtain a more accurate confidence score by combining the three consistencies, this score is still not as good as the inaccessible ground truth. This is because many other "nuisance" and untraceable factors, such as the randomness of neural networks, affect how closely the pseudo-label approaches the ground truth. In this circumstance, directly fitting an unreliable target may still degrade performance. The original VAE was proposed to learn a continuous distribution from discontinuous observations by sampling a hidden variable. This mechanism suits learning from suboptimal pseudo-labels, because fitting the prediction to the generated pseudo-label can be viewed as a surrogate for fitting it to the ground truth. Since eliminating those nuisance factors is not tractable, we use a VAE rather than an MLP to simulate this process.

4 Core Set Selection with INFUSE

In the previous section, we introduced the VCC framework, which provides well-calibrated confidence scores to improve the accuracy of pseudo-label selection. Nonetheless, as discussed earlier, training the SSL model still incurs substantial computational expense. Furthermore, the additional encoder and decoder of VCC introduce extra computation overhead. To address these challenges, we present INFUSE, a core set selection method for efficient example selection. Based on the influence function (Koh & Liang, 2017), INFUSE allows the SSL model to be trained on only a subset of the full unlabeled dataset, so that the training time can be significantly reduced.
In SSL, the model should minimize the loss on the validation set to obtain the highest generalization accuracy:

\[ \min \ L(V, \theta^*), \quad \text{s.t.} \quad \theta^* = \arg \min \ R(\theta), \] (12)

\[ R(\theta) \doteq \mathbb{E}_{(x,y) \in S}[H(q_x, y)] + \lambda \cdot \mathbb{E}_{u \in U}[\mathbbm{1}(\max(q_u) \geq \tau) \cdot H(q_u, p(y|u))]. \]

Here \( H \) is the loss function, \( \tau \) is the threshold for pseudo-label selection, \( q \) is the confidence distribution, \( p(y|u) \) is the pseudo-label, and \( R(\theta) \) is the total loss on the labeled dataset \( S \) and the unlabeled dataset \( U \). Now assume the weight of an unlabeled example \( u' \) is increased by \( \epsilon \). Denoting \( L_U(u', \theta) = \lambda \cdot \mathbbm{1}(\max(q_{u'}) \geq \tau) \cdot H(q_{u'}, p(y|u')) \), the optimal model parameters corresponding to the reweighted training set become:

\[ \hat{\theta} = \arg \min \ R(\theta) + \epsilon \cdot L_U(u', \theta). \] (13)

In Eq. (13), \( \hat{\theta} \) minimizes the loss on the training set, which means the gradient w.r.t. \( \theta \) vanishes at \( \hat{\theta} \):

\[ \nabla_\theta R(\hat{\theta}) + \epsilon \nabla_\theta L_U(u', \hat{\theta}) = 0. \] (14)

Using a Taylor-series approximation at \( \theta^* \), Eq. (14) can be rewritten as:

\[ \nabla_\theta R(\theta^*) + \epsilon \cdot \nabla_\theta L_U(u', \theta^*) + (\nabla^2_\theta R(\theta^*) + \epsilon \cdot \nabla^2_\theta L_U(u', \theta^*)) \cdot (\hat{\theta} - \theta^*) = 0, \] (15)

which gives (please refer to Appendix C for details):

\[ \hat{\theta} - \theta^* \approx - (\nabla^2_\theta R(\theta^*))^{-1} \cdot \epsilon \nabla_\theta L_U(u', \theta^*) \doteq - \epsilon \cdot H^{-1}_\theta \nabla_\theta L_U(u, \theta). \] (16)

Using the chain rule \( \frac{dL}{d\epsilon} = \frac{dL}{d\theta} \cdot \frac{d\theta}{d\epsilon} \), the importance of an unlabeled example can be estimated as:

\[ \text{score}_\theta(u) = \frac{dL(V, \theta)}{d\epsilon} = \nabla_\theta L(V, \theta)^\top \frac{d\theta}{d\epsilon} = -\nabla_\theta L(V, \theta)^\top H^{-1}_\theta \nabla_\theta L_U(u, \theta). \] (17)
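As an illustration of how Eq. (17) could be evaluated in practice, the sketch below computes the score under the identity-matrix approximation of the inverse Hessian adopted later for efficiency. The function and argument names are hypothetical and are not taken from the authors' code.

```python
import torch

def infuse_scores(model, val_loss, unlabeled_losses):
    """Sketch of Eq. (17) with the approximation H^{-1} ~ I.

    val_loss: scalar loss on the validation/support set;
    unlabeled_losses: iterable of scalar losses L_U(u, theta), one per unlabeled example (or batch).
    Returns one influence score per loss; the highest-scoring examples form the core set.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_val = torch.autograd.grad(val_loss, params, retain_graph=True)
    g_val = torch.cat([g.reshape(-1) for g in g_val])

    scores = []
    for loss_u in unlabeled_losses:
        g_u = torch.autograd.grad(loss_u, params, retain_graph=True)
        g_u = torch.cat([g.reshape(-1) for g in g_u])
        scores.append(-(g_val @ g_u).item())   # -grad L(V, theta)^T grad L_U(u, theta)
    return scores
```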
Table 1: Comparison of error rate (%) for different methods under various settings.

| Method | CIFAR-10 (40) | CIFAR-10 (250) | CIFAR-10 (4000) | CIFAR-100 (400) | CIFAR-100 (2500) | CIFAR-100 (10000) | SVHN (40) | SVHN (250) | SVHN (1000) |
|-----------------|------|------|------|------|------|------|------|------|------|
| PL | 76.29±1.08 | 48.28±2.01 | 14.91±0.20 | 87.15±0.47 | 59.09±0.61 | 38.86±0.09 | 75.95±3.39 | 16.60±1.13 | 2.33±0.58 |
| UDA | 8.01±1.34 | 5.12±0.15 | 4.32±0.07 | 53.44±2.06 | 34.37±0.28 | 27.52±0.10 | 2.03±0.02 | 2.03±0.03 | 1.96±0.01 |
| VAT | 76.42±2.57 | 42.58±6.67 | 10.97±0.19 | 83.11±0.27 | 53.17±0.57 | 36.58±0.21 | 77.04±6.59 | 4.59±0.13 | 4.09±0.21 |
| MeanTeacher | 76.93±2.29 | 56.06±2.03 | 15.47±0.43 | 90.34±0.65 | 61.13±0.57 | 39.05±0.12 | 81.94±1.33 | 25.10±3.17 | 12.29±0.45 |
| MixMatch | 70.94±1.25 | 37.28±0.61 | 7.38±0.06 | 79.95±0.29 | 49.58±0.62 | 32.10±0.13 | 79.95±5.78 | 3.31±0.20 | 3.12±0.09 |
| ReMixMatch | 11.54±2.58 | 4.37±0.05 | 3.71±0.01 | 57.14±0.32 | 26.30±0.23 | 17.08±1.89 | 6.38±1.09 | 2.95±0.45 |
| SoftMatch | 15.01±3.70 | 5.13±0.28 | 4.27±0.12 | 53.98±0.34 | 34.47±0.12 | 27.72±0.03 | 31.97±1.19 | 2.05±0.03 |
| CoMatch | 5.06±0.70 | 4.84±0.26 | 4.27±0.12 | 49.64±1.46 | 33.05±0.12 | 27.26±0.03 | 2.31±0.01 | 2.15±0.05 |
| FixMatch | 5.44±0.05 | 5.33±0.12 | 4.29±0.04 | 60.98±0.77 | 37.24±0.24 | 28.15±0.16 | 9.51±5.59 | 2.21±0.20 | 1.96±0.07 |
| VCC-FixMatch | 6.84±0.52 | 4.68±0.04 | 4.27±0.21 | 46.47±0.05 | 28.09±0.06 | 22.21±0.02 | 2.98±1.23 | 1.99±0.05 | 1.98±0.06 |
| FlexMatch | 4.98±0.01 | 5.00±0.05 | 4.24±0.07 | 40.43±0.63 | 26.38±0.17 | 21.83±0.08 | 3.30±0.37 | 5.02±1.20 | 5.45±0.46 |
| VCC-FlexMatch | 4.90±0.10 | 4.65±0.07 | 4.14±0.15 | 37.98±0.25 | 25.75±0.11 | 21.48±0.07 | 2.62±0.08 | 4.97±0.08 | 2.71±1.19 |
| SimMatch | 3.06±0.37 | 4.41±0.35 | 3.90±0.01 | 37.81±2.21 | 24.04±0.32 | 20.56±0.11 | 3.10±0.72 | 2.27±0.12 | 2.07±0.03 |
| VCC-SimMatch | 5.27±0.34 | 4.76±0.14 | 3.87±0.24 | 37.22±0.04 | 24.99±0.13 | 20.61±0.03 | 3.04±0.02 | 2.20±0.01 | 3.39±0.02 |
| Fully-Supervised| 4.38±0.05 | | | 17.67±0.08 | | | 2.07±0.02 | | |

Table 2: The error rate results (%) of different methods on the STL-10 dataset.

| Labels | MeanTeacher | MixMatch | ReMixMatch | FixMatch | w/VCC | FlexMatch | w/VCC | SimMatch | w/VCC |
|--------|-------------|----------|------------|----------|-------|-----------|-------|----------|-------|
| 40 | 71.72 | 34.93 | 32.12 | 35.97 | 30.63(±0.53) | 29.15 | 28.14(±1.03) | 27.84 | 26.97(±0.87) |
| 1000 | 33.90 | 21.70 | 6.74 | 6.25 | 5.31(±0.04) | 5.77 | 5.52(±0.25) | 5.91 | 5.51(±0.40) |

Table 3: The error rate, ECE (Guo et al., 2017), MCE (Guo et al., 2017) and ACE (Nixon et al., 2019) results of different methods on the CIFAR-100 dataset with 400/2500/10000 labeled examples.

Eq. (17) is used to compute $\text{score}_\theta(u)$ for each unlabeled example. The unlabeled examples with the highest scores are preserved to build the core set, while the others are simply dropped. In our implementation, the INFUSE score is calculated batch-wise to reduce the computation overhead. Besides, we use the identity matrix to approximate the inverse Hessian \(H_{\theta}^{-1}\) (Luketina et al., 2016) for efficiency. The last problem is how to compute \(\nabla_{\theta} L(V, \theta)\) when the ground-truth labels of examples in \(V\) are unavailable in training. To address this, we propose a feature-level mixup to build a support set \(S\). Then, the gradient on the validation set is approximated by \(\nabla_{\theta} L(S, \theta)\). Please refer to Appendix D for details.

5 EXPERIMENTS

The effectiveness of our method is evaluated on standard SSL datasets: CIFAR-10/100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and STL-10 (Coates et al., 2011).
We follow the most commonly used SSL setting (Sohn et al., 2020) to train the model (please refer to Appendix E for more details). Specifically, the keep ratio \(k\) controls the size of the core set. Taking \(k = 10\%\) as an example, the number of examples in the core set is \(10\% \times |U|\), and the total number of training steps also becomes 10% of the original iterations.

5.1 Main Results

In this section, we demonstrate the effectiveness of VCC and INFUSE separately, and then combine them to achieve more efficient and accurate pseudo-label selection in SSL. As mentioned before, VCC is a general confidence calibration plugin, which makes it possible to combine VCC with existing SSL methods flexibly. In experiments, we choose the popular FixMatch (Sohn et al., 2020), FlexMatch (Zhang et al., 2021), and SimMatch (Zheng et al., 2022) as the basic modules to build VCC-FixMatch, VCC-FlexMatch, and VCC-SimMatch. We report the mean value and the standard deviation of three random independent trials for each setting, with the results shown in Table 1. All three baseline methods (FixMatch, FlexMatch, SimMatch) achieve accuracy improvements when combined with VCC for confidence calibration. The improvement brought by VCC is more significant when the number of labeled examples is small. Taking the results on CIFAR-100 as an example, when only 400 labeled examples are available, VCC-FlexMatch reduces the error rate of FlexMatch from 46.47% to 43.31% (-3.16%). A similar boost is also observed on the STL-10 dataset, as shown in Table 2, where VCC reduces the error rate of FixMatch by 5.34% (from 35.97% to 30.63%) with only 40 labels. To further verify the source of the accuracy improvement of VCC, we calculate the calibration errors of the different methods. As shown in Table 3, both VCC-FixMatch and VCC-FlexMatch achieve lower calibration errors than the baseline methods under various settings.

| Method | CIFAR-10 | CIFAR-100 | STL-10 | SVHN |
|-----------------|----------|-----------|--------|------|
| | 250 label| 4000 label| 2500 label| 10000 label| 250 label| 250 label|
| | 10% 20% 40%| 10% 20% 40%| 10% 20% 40%| 10% 20% 40%| 10% 20%| 10% 20%|
| Random | 9.12 6.87 6.51| 5.26 5.01| 31.55 31.11| 28.86 23.19| 22.51| 16.62 14.37| 3.85 4.65|
| Earlystop | 7.47 6.03 6.85| 4.86 4.52| 29.21 28.85| 27.30 23.03| 22.61| 16.31 13.20| 2.93 3.08|
| EL2N | 8.55 7.47 6.70| 4.94 4.54| 31.55 31.27| 28.42 23.12| 22.21| 16.27 12.92| 3.66 3.61|
| GradMatch | 6.71 5.87 5.60| 4.72 4.45| 28.95 28.48| 26.71 22.72| 22.21| 16.05 12.90| 2.90 2.63|
| RETRIEVE | 6.60 6.02 5.48| 4.68 4.41| 28.75 28.34| 26.68 22.56| 22.18| 16.05 12.90| 2.90 2.63|
| INFUSE (Ours) | **6.29** **5.69** **5.33**| **4.51** **4.34**| **28.83** **28.05**| **26.47** **22.28**| **21.97**| **15.84** **12.71**| **2.61** **2.46**|
| Full Unlabeled Data | 4.98 | 4.19 | 26.49 | 21.90 | 8.23 | 3.80 |

Table 4: Comparison of error rate (%) for core set selection methods on different datasets with varying example keep ratio (from 10% to 60%).

| Method | Error Rate (%) | Training time (GPU Hours) |
|-------------------------|----------------|---------------------------|
| Dash(RandAug) | 27.15 | - |
| MPL | 27.71 | - |
| FixMatch | 28.03 | 221.91 |
| FlexMatch | 26.49 | 223.96 |
| VCC-FlexMatch (Ours) | **25.26** | **253.53** |
| VCC-INFUSE-FlexMatch (Ours, keep ratio=40%) | **25.41** | **115.47** |

Table 5: The error rate and training time of different methods on the CIFAR-100 dataset with 2500 labeled data. The GPU Hours metric is calculated based on the A100 GPU.
VCC-SimMatch also achieves lower ECE and ACE metrics when only 400 labeled examples are available. However, the MCE metric is deteriorated, which is attributed to the fact that MCE considers the worst-calibrated bucket and introduces some fluctuations. Under the setting of using 10,000 labeled examples, the results of VCC-SimMatch and SimMatch are very close. This is partly because a larger number of labeled examples can naturally improve the model’s performance and reduces the calibration error. Besides, SimMatch has employed instance similarity for rescaling the confidence score, which may reduce the benefits brought by VCC. The results of INFUSE and other core set selection methods (e.g. RETRIEVE (Killamsetty et al., 2021b)) are shown in Table 4. On the CIFAR-10 dataset, INFUSE achieves a relatively low error rate (6.29%) using only 10% of the examples, indicating the original unlabeled data is redundant and proving the significance of core set selection in SSL. With the increase of the keep ratio, the gap between INFUSE and the non-pruned setting becomes smaller. For example, on the CIFAR-100 dataset when the amount of labeled data is 2500 and the keep ratio is 40%, INFUSE achieves an error rate of 26.47% while the baseline is 26.49%. When compared with other core set selection methods, INFUSE also achieves lower error rates in most settings. The results above show the effectiveness of VCC and INFUSE respectively. By combining two together, we propose the VCC-INFUSE method. The results are shown in Table 5. VCC-INFUSE achieves a better trade-off between model performance and computation costs. Compared to FlexMatch, VCC-INFUSE-FlexMatch can not only reduce the error rate from 26.49% to 25.41% (-1.08%), but also reduce the training time from 223.96 GPU Hours to 115.47 GPU Hours (-48.44%). | Method | Error Rate (%) | |---------------------------------------------|----------------| | FixMatch (Sohn et al., 2020) | 25.07 | | FlexMatch (Zhang et al., 2021) | 25.87 | | SimMatch (Zheng et al., 2022) | 61.54 | | FixMatch+DASO (Oh et al., 2022) | 24.63 | | FixMatch+DebiasPL (Wang et al., 2022) | 24.42 | | FixMatch+DARP (Kim et al., 2020) | 22.93 | | FixMatch+Adsh (Guo & Li, 2022) | 21.88 | | FixMatch+VCC (Ours) | **21.16** | Table 6: The error rate (%) of different methods on CIFAR-10-LT under the class imbalance setting. | Reconstruct $\tilde{r}_u$ by VAE | ER(%) | ECE | MCE | ACE | |----------------------------------|-------|-----|-----|-----| | $x$ | 25.76 | 0.160 | 0.411 | 0.168 | | $\checkmark$ | 25.26 | 0.147 | 0.324 | 0.163 | Table 7: The error rate of VCC with or without reconstructing calibrated confidence on CIFAR-100 dataset with 2500 labeled examples. ### 5.2 Supplementary Results **Class Imbalance SSL.** We also design the experiments for more realistic settings such as class imbalance. The experiment results are shown in Table 6 (please refer to Appendix G for experiments settings). While FixMatch surpasses FlexMatch by 0.8% with an error rate of 25.07%, SimMatch only achieves 61.54%, which shows a total failure on this task. DASO and DebiasPL slightly reduce the error rate to 24.63% and 24.42%, respectively. DARP achieves better performance with an error rate of 22.93%. However, the proposed VCC, which is not designed for imbalance-SSL specifically, produces the lowest error rate of 21.16%, which is 0.72% lower than the second-best method Adsh. The results further prove our method’s ability to reduce bias and bring a more accurate pseudo-label. 
**The Effectiveness of Reconstructing Calibrated Confidence by VAE.** In VCC, we first approximate the calibrated confidence to obtain $\tilde{r}_u$, then use VAE to reconstruct it to obtain $r_u$, which will be used in pseudo-label selection. The objective of reconstruction aims at alleviating the randomness of statistical approximation. To demonstrate the necessity, we conduct the ablation study. As shown in Table 7, VCC with reconstruction further reduces the error rate by 0.50%. **VCC v.s. other calibration methods.** Although most calibration methods for the fully-supervised setting are unsuitable in SSL, the pseudo-label can be used to approximate ground truth. We choose Ensemble-TS (Zhang et al., 2020) and MMCE (Kumar et al., 2018) as the baseline to compare with VCC. As shown in Table 8 (Appendix E), the error rate of MMCE is the highest (28.44%). The reason is that MMCE directly uses the pseudo-label to calculate the calibration regularization, while the incorrect pseudo-label may bring the noise. As for Ensemble-TS, it uses pseudo-label to search the optimal parameter scaling, which can alleviate the problem of incorrect pseudo-label to some extent (ER=26.36%). As a comparison, VCC achieves the lowest error rate (25.26%) and the best calibration performance. **The Ablation Study of Three Consistency Scores.** We use view consistency, temporal consistency, and ensemble consistency for estimating $\tilde{r}$. The three consistency scores are designed to reflect the stability of prediction from different perspectives. To analyze their contribution, we conduct the ablation study (Table 9 in Appendix F). As we can see, each consistency score contributes to the estimation of a more accurate $\tilde{r}$ so that a lower error rate can be achieved. ### 6 Conclusion In this paper, we addressed the challenges of leveraging large-scale unlabeled data in SSL and proposed two novel methods, VCC and INFUSE, to improve the effectiveness and efficiency of data selection. As a general plugin, VCC significantly improves the accuracy of FixMatch, FlexMatch and SimMatch on multiple datasets. Simultaneously, INFUSE achieves competitive or even lower error rates with partial unlabeled data. By combining two together, VCC-INFUSE achieves a lower error rate with less computation overhead. The future work is to extend VCC-INFUSE to more SSL tasks (e.g. object detection, segmentation) to verify its generalization. REFERENCES David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. MixMatch: A Holistic Approach to Semi-Supervised Learning. In *Advances in Neural Information Processing Systems, Vancouver, British Columbia, Canada*, 2019. David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. In *International Conference on Learning Representations, Addis Ababa, Ethiopia*, 2020. Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, and Marios Savvides. Softmatch: Addressing the quantity-quality tradeoff in semi-supervised learning. In *The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023*, 2023. Adam Coates, Andrew Y. Ng, and Honglak Lee. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. 
In *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, USA*, 2011. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In *Conference on Computer Vision and Pattern Recognition, Miami, Florida, USA*, 2009. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In *Proceedings of the 33rd International Conference on Machine Learning, New York City, NY, USA*, 2016. Stoil Ganev and Laurence Aitchison. Semi-supervised Learning Objectives as Log-likelihoods in a Generative Model of Data Curation. *arXiv preprint arXiv:2008.05913*, 2020. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In *International Conference on Machine Learning, Sydney, NSW, Australia*, 2017. Lan-Zhe Guo and Yu-Feng Li. Class-Imbalanced Semi-Supervised Learning with Adaptive Thresholding. In *International Conference on Machine Learning, Baltimore, Maryland, USA*, 2022. KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, and Rishabh K. Iyer. GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training. In *Proceedings of the International Conference on Machine Learning, Virtual Event*, 2021a. KrishnaTeja Killamsetty, Xujiang Zhao, Feng Chen, and Rishabh K. Iyer. RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning. In *Advances in Neural Information Processing Systems, virtual*, 2021b. Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin. Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning. In *Advances in Neural Information Processing Systems, virtual*, 2020. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In *International Conference on Learning Representations, Banff, AB, Canada*, 2014. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In *Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017*, 2017. Alex Krizhevsky, Geoffrey Hinton, et al. Learning Multiple Layers of Features from Tiny Images. In *Doctoral dissertation, University of Toronto*, 2009. Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable Calibration Measures For Neural Networks From Kernel Mean Embeddings. In *International Conference on Machine Learning, Stockholm, Sweden*, 2018.
XasWgF5WsZ
The necessity of the FEI coefficient is vague. Making the FEI coefficient as small as possible is equivalent to directly removing all noise, i.e., h(t) = 0. This raises a trivial question: why not just directly use the ODE solver and the Taylor-approximation-based higher-order sampler?
ELUCIDATING THE SOLUTION SPACE OF EXTENDED REVERSE-TIME SDE FOR DIFFUSION MODELS Anonymous authors Paper under double-blind review ABSTRACT Diffusion models (DMs) demonstrate potent image generation capabilities in various generative modeling tasks. Nevertheless, their primary limitation lies in slow sampling speed, requiring hundreds or thousands of sequential function evaluations through large neural networks to generate high-quality images. Sampling from DMs can be seen alternatively as solving corresponding stochastic differential equations (SDEs) or ordinary differential equations (ODEs). In this work, we formulate the sampling process as an extended reverse-time SDE (ER SDE), unifying prior explorations into ODEs and SDEs. Leveraging the semi-linear structure of ER SDE solutions, we offer exact solutions and approximate solutions for VP SDE and VE SDE, respectively. Based on the solution space of the ER SDE, we yield mathematical insights elucidating the superior performance of ODE solvers over SDE solvers in terms of fast sampling. Additionally, we unveil that VP SDE solvers stand on par with their VE SDE counterparts. Finally, we devise fast and training-free samplers, ER-SDE-Solvers, achieving state-of-the-art performance across all stochastic samplers. Experimental results demonstrate achieving 3.45 FID in 20 function evaluations and 2.24 FID in 50 function evaluations on the ImageNet $64 \times 64$ dataset. 1 INTRODUCTION Figure 1: Samples generated using Ours (ER-SDE-Solver-3) on ImageNet $128 \times 128$ (Left, FID=8.33), FFHQ $64 \times 64$ (Middle, FID=4.67) and CIFAR-10 (Right, FID=3.15) when NFE=20. Diffusion Models (DMs) demonstrate an aptitude for producing high-quality samples and exhibit a stable training process that is resilient to disruptions. They have found extensive utilization in diverse generative modeling tasks, encompassing image synthesis (Dhariwal & Nichol [2021], Ho et al. [2020], Song et al. [2021]), image super-resolution (Saharia et al. [2022b], Gao et al. [2023]), image restoration (Chung et al. [2022], Luo et al. [2023]), image editing (Avrahami et al. [2022], Meng et al. [2021]), image-to-image translation (Zhao et al. [2022], Su et al. [2022]), and similar domains. However, in comparison to alternative generative models like Generative Adversarial Networks (GANs) (Goodfellow et al. [2014]), DMs frequently necessitate multiple function evaluations, constraining their applicability in real-time scenarios. Forward SDE: \[ dx_t = f(t)x_t \, dt + g(t)dw_t \] Extended Reverse SDE: \[ dx_t = \left[ f(t)x_t - \frac{g^2(t) + h^2(t)}{2} \nabla_x \log p_t(x_t) \right] dt + h(t)d\bar{w}_t \] Figure 2: A unified framework for DMs: The forward process described by an SDE transforms real data into noise, while the reverse process characterized by an ER SDE generates real data from noise. Once the score function \( \nabla_x \log p_t(x_t) \) is estimated by a neural network, solving the ER SDE enables the generation of high-quality samples. Hence, a substantial body of research revolves around designing fast diffusion model samplers. Fast samplers of DMs can be categorized into training-based and training-free methods. While training-based methods can generate high quality samples at 2~4 sampling steps (Salimans & Ho [2021], Meng et al. [2023], Song et al. [2023]), the prerequisite for retraining renders them cost-intensive. Conversely, training-free methods directly utilize raw information without retraining, offering broad applicability and high flexibility. 
Song et al. [2021b] have indicated that the image generation process is equivalent to solving stochastic differential equations (SDEs) or ordinary differential equations (ODEs) in reverse time. The essence of training-free methods lies in designing efficient solvers for SDEs (Bao et al. [2022], Zhang et al. [2023]) or ODEs (Lu et al. [2022a], Zhang & Chen [2023]). Presently, accomplishments based on ODE solvers have been particularly notable. For instance, UniPC (Zhao et al. [2023]) achieves rapid sampling up to 10 steps, yet exploration into works based on SDE solvers remains limited. Compared with ODE-based deterministic sampling, SDE-based stochastic samplers are more challenging due to the injection of additional noise into the data state at each sampling step. However, observations indicate that SDE-based stochastic samplers show promise in producing data of superior quality when increasing sampling steps (Song et al. [2021b], Xue et al. [2023]), motivating us to explore the efficient SDE-based stochastic samplers further. In this work, we present a unified framework for DMs, wherein Extended SDE formulation is proposed (see Fig 2). Within this framework, we define a solution space and design some highly effective SDE solvers. Specifically, we first model the sampling process as an Extended Reverse-Time (ER) SDE, which is an extension of Song et al. [2021b] and Zhang & Chen [2023]. Inspired by Lu et al. [2022a], we unveil the semi-linear structure inherent in the solutions of the ER SDE—comprising linear functions of data variables, nonlinear functions parameterized by neural networks and noise terms. Building on it, we deduce exact solutions for both Variance Exploding (VE) SDE and Variance Preserving (VP) SDE (Song et al. [2021b]). This is achieved by the analytical computation of the linear portions and noise terms, thereby circumventing associated discretization errors. We also offer practical approximations for both VP SDE and VE SDE. By analyzing the errors of the approximate solutions of the ER SDE, we discover that varying levels of discretization errors emerge due to the incorporation of different noise scales during the reverse process. This phenomenon gives rise to the solution space inherent in the ER SDE. We ascertain that the optimal solution within this space aligns with ODE, which mathematically demonstrates that the performance of SDE-based methods falls short when the number of steps is limited. We also theoretically establish that the VP SDE solvers yield image quality equivalent to VE SDE solvers, given the consistency of the employed pretrained models. Moreover, through selecting the noise scale functions carefully, we devise some specialized Extended Reverse-Time SDE Solvers (ER-SDE-Solvers), which rival ODE solvers in terms of rapid sampling. In summary, we have made several theoretical and practical contributions: 1) We formulate an ER SDE and provide an exact solution as well as approximations for VP SDE and VE SDE, respectively. 2) Through a rigorous analysis of errors in the approximate solutions, we establish mathematically that the performance of SDE solvers for rapid sampling is inferior to that of ODE solvers. Moreover, VP SDE solvers achieve the same level of image quality compared with VE SDE solvers. 3) We present specialized ER-SDE-Solvers for rapid sampling. Extensive experimentation reveals that ER- SDE-Solvers achieve state-of-the-art performance across all stochastic samplers. 
Specifically, they obtain 3.45 FID in 20 NFE and 2.24 FID in 50 NFE on the ImageNet $64 \times 64$ dataset, while 3.09 FID in 20 NFE and 1.97 FID in 50 NFE on the CIFAR-10 dataset. 2 DIFFUSION MODELS Diffusion models (DMs) represent a category of probabilistic generative models encompassing both forward and backward processes. During the forward process, DMs gradually incorporate noise at different scales, while noise is gradually eliminated to yield real samples in the backward process. In the context of continuous time, the forward and backward processes can be described by SDEs or ODEs. In this section, we primarily review the stochastic differential equations (SDEs) and ordinary differential equations (ODEs) pertinent to DMs. 2.1 FORWARD DIFFUSION SDEs The forward process can be expressed as a linear SDE (Kingma et al., 2021): $$dx_t = f(t)x_t dt + g(t)d\mathbf{w}_t, \quad x_0 \sim p_0(x_0),$$ (1) where $x_0 \in \mathbb{R}^D$ is a D-dimensional random variable following an unknown probability distribution $p_0(x_0)$. $\{x_t\}_{t \in [0,T]}$ denotes each state in the forward process, and $\mathbf{w}_t$ stands for a standard Wiener process. When the coefficients $f(t)$ and $g(t)$ are piecewise continuous, a unique solution exists (Øksendal, 2013). By judiciously selecting these coefficients, Eq.(1) can map the original data distribution to a priory known tractable distribution $p_T(x_T)$, such as the Gaussian distribution. The selection of $f(t)$ and $g(t)$ in Eq.(1) is diverse. Based on the distinct noise employed in SMLD (Song & Ermon, 2019) and DDPM (Ho et al., 2020; Sohl-Dickstein et al., 2015), two distinct SDE formulations (Song et al., 2021b) are presented. **Variance Exploding (VE) SDE:** The noise perturbations used in SMLD can be regarded as the discretization of the following SDE: $$dx_t = \sqrt{\frac{d\sigma_t^2}{dt}} d\mathbf{w}_t,$$ (2) where $\sigma_t$ is the positive noise scale. As $t \to \infty$, the variance of this stochastic process also tends to infinity, thus earning the appellation of Variance Exploding (VE) SDE. **Variance Preserving (VP) SDE:** The noise perturbations used in DDPM can be considered as the discretization of the following SDE: $$dx_t = \frac{d\log \alpha_t}{dt} x_t dt + \sqrt{\frac{d\sigma_t^2}{dt} - 2\frac{d\log \alpha_t}{dt}\sigma_t^2} d\mathbf{w}_t,$$ (3) where $\alpha_t$ is also the positive noise scale. Unlike the VE SDE, the variance of this stochastic process remains bounded as $t \to \infty$. Therefore, it is referred to as Variance Preserving (VP) SDE. 2.2 REVERSE DIFFUSION SDEs The backward process can similarly be described by a reverse-time SDE (Song et al., 2021b): $$dx_t = \left[f(t)x_t - g^2(t)\nabla_x \log p_t(x_t)\right] dt + g(t)d\mathbf{w}_t, \quad x_T \sim p_T(x_T),$$ (4) where $\mathbf{w}_t$ is the standard Wiener process in the reverse time. $p_t(x_t)$ represents the probability distribution of the state $x_t$, and its logarithmic gradient $\nabla_x \log p_t(x_t)$ is referred to as the score function, which is often estimated by a neural network $s_\theta(x_t,t)$. There are also studies (Zhang & Chen, 2023; 2021) that consider the following reverse-time SDE: $$dx_t = \left[f(t)x_t - \frac{1 + \lambda^2}{2}g^2(t)\nabla_x \log p_t(x_t)\right] dt + \lambda g(t)d\mathbf{w}_t, \quad x_T \sim p_T(x_T),$$ (5) where the parameter $\lambda \geq 0$. Eq.(5) similarly shares the same marginal distribution as Eq.(1). 
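To illustrate what "solving" such a reverse-time SDE involves in practice, the following minimal sketch applies a plain Euler-Maruyama discretization to Eq.(4). It is only a sketch under the assumption of an already trained score network; the names are ours, and each loop iteration costs one network evaluation, which is why naive solvers need many steps.

```python
import torch

def reverse_sde_euler_maruyama(score_net, x_T, ts, f, g):
    """Plain Euler-Maruyama discretization of the reverse-time SDE in Eq. (4).

    score_net(x, t): estimate of grad_x log p_t(x); f(t), g(t): drift/diffusion coefficients;
    ts: decreasing time grid from T down to (almost) 0.
    """
    x = x_T
    for i in range(len(ts) - 1):
        t, t_next = ts[i], ts[i + 1]
        dt = t_next - t                                   # negative: time runs backwards
        drift = f(t) * x - g(t) ** 2 * score_net(x, t)
        x = x + drift * dt + g(t) * abs(dt) ** 0.5 * torch.randn_like(x)
    return x
```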
Once the score-based network \( s_\theta(x_t, t) \) is trained, generating images only requires solving the reverse-time SDE in Eq.(4) or Eq.(5). The conventional ancestral sampling method (Ho et al., 2020) can be viewed as a first-order SDE solver (Song et al., 2021b), yet it needs thousands of function evaluations to generate high-quality images. Numerous endeavors (Jolicoeur-Martineau et al., 2021; Bao et al., 2022) have sought to enhance sampling speed by devising highly accurate SDE solvers, but they still require hundreds of function evaluations, presenting a gap compared to ODE solvers. ### 2.3 REVERSE DIFFUSION ODES In the backward process, in addition to directly solving the reverse-time SDE in Eq.(4), a category of methods (Zhao et al., 2023; Lu et al., 2022a; Zhang & Chen, 2023) focuses on solving the probability flow ODE corresponding to Eq.(4), expressed specifically as \[ dx_t = \left[ f(t)x_t - \frac{1}{2}g^2(t)\nabla_x \log p_t(x_t) \right] dt, \quad x_T \sim p_T(x_T). \] Eq.(6) shares the same marginal distribution at each time \( t \) with the SDE in Eq.(4), and the score function \( \nabla_x \log p_t(x_t) \) can also be estimated by a neural network. Unlike SDEs, which introduce stochastic noise at each step, ODEs correspond to a deterministic sampling process. Despite several experiments (Song et al., 2021b) suggesting that ODE solvers outperform SDE solvers in terms of efficient sampling, SDE solvers can generate higher-quality images with a minimal increase in NFE. In summary, SDE-based methods can generate higher-quality samples, but exhibit slower convergence in high dimensions (Kloeden & Platen, 1992b; Lu et al., 2022a). Conversely, ODE-based methods demonstrate the opposite behavior. To strike a balance between high-quality and efficiency in the generation process, in Sec.3, we model the backward process as an extended SDE, and provide analytical solutions as well as approximations for both VP SDE and VE SDE. Furthermore, we devise some SDE solvers in Sec.4 whose efficiency is comparable to that of ODE solvers. ### 3 EXTENDED REVERSE-TIME SDE SOLVER FOR DMs There are three types of methods for recovering samples from noise in DMs. The first predicts the noise added in the forward process, achieved by a noise prediction model \( e_\theta(x_t, t) \) (Ho et al., 2020). The second utilizes a score prediction model \( s_\theta(x_t, t) \) to match the score function \( \nabla_x \log p_t(x_t) \) (Song et al., 2021b; Hyvärinen & Dayan, 2005). The last directly restores the original data from the noisy samples, achieved by a data prediction model \( x_\theta(x_t, t) \). These models can be mutually derived (Kingma et al., 2021). Previously, most SDE solvers (Song et al., 2021b; Jolicoeur-Martineau et al., 2021; Bao et al., 2022) have relied on the score-based model. Based on modeling the backward process as an extended SDE in Sec.3.1, we proceed to solve the VE SDE and VP SDE for the data prediction model in Sec.3.2 and Sec.3.3, respectively. Compared with the other two types of models, the data prediction model can be synergistically combined with thresholding methods (Ho et al., 2020; Saharia et al., 2022a) to mitigate the adverse impact of large guiding scales, thereby finding broad application in guided image generation (Lu et al., 2022b). #### 3.1 EXTENDED REVERSE-TIME SDE Besides Eq.(4) and Eq.(5), an infinite variety of diffusion processes can be employed. 
In this paper, we consider the following family of SDEs (referred to as the Extended Reverse-Time SDE (ER SDE)):

\[ dx_t = \left[ f(t)x_t - \frac{g^2(t) + h^2(t)}{2}\nabla_x \log p_t(x_t) \right] dt + h(t)\, d\mathbf{w}_t. \quad (7) \]

The score function \( \nabla_x \log p_t(x_t) \) can be estimated using the pretrained neural network. Hence, generating samples only requires solving Eq.(7), which is guaranteed by Proposition 1.

**Proposition 1** (The validity of the ER SDE, proof in Appendix A.1). When \( s_\theta(x_t, t) = \nabla_x \log p_t(x_t) \) for all \( x_t \) and \( t \), and \( \tilde{p}_T(x_T) = p_T(x_T) \), the marginal distribution \( \tilde{p}_t(x_t) \) of Eq.(7) matches the marginal distribution \( p_t(x_t) \) of the forward diffusion Eq.(1) for all \( 0 \leq t \leq T \).

Eq.(7) extends the reverse-time SDEs proposed in Song et al. (2021b); Zhang & Chen (2023); Xue et al. (2023); Karras et al. (2022). Specifically, in Song et al. (2021b), the noise scale \( g(t) \) added at each time step \( t \) of the reverse process is the same as that of the corresponding moment in the forward process. Zhang & Chen (2023); Xue et al. (2023); Karras et al. (2022) introduce a non-negative parameter to control the amount of noise added during the reverse process; however, the form of the noise scale remains tied to \( g(t) \). In contrast, our ER SDE introduces a completely new noise scale \( h(t) \) for the reverse process. This implies that the noise scale \( h(t) \) added during the reverse process need not be correlated with the scale \( g(t) \) of the forward process. In particular, the ER SDE reduces to the reverse-time SDEs in Eq.(4) and Eq.(5) and to the ODE in Eq.(6) for specific choices of \( h(t) \). By extending the reverse-time SDE, we not only unify ODEs and SDEs under a single framework, facilitating the comparative analysis of the two methods, but also lay the groundwork for designing more efficient SDE-based samplers. Further details are discussed in Sec. 4.

### 3.2 VE ER-SDE-SOLVERS

For the VE SDE, \( f(t) = 0 \) and \( g(t) = \sqrt{\frac{d\sigma_t^2}{dt}} \) (Kingma et al., 2021). The relationship between the score prediction model \( s_\theta(x_t, t) \) and the data prediction model \( x_\theta(x_t, t) \) is \( -[x_t - x_\theta(x_t, t)]/\sigma_t^2 = s_\theta(x_t, t) \). By replacing the score function with the data prediction model, Eq.(7) can be expressed as

\[ dx_t = \frac{1}{2\sigma_t^2} \left[ \frac{d\sigma_t^2}{dt} + h^2(t) \right] [x_t - x_\theta(x_t, t)]\, dt + h(t)\, dw_t. \quad (8) \]

Denoting \( dw_\sigma := \sqrt{\frac{d\sigma_t}{dt}}\, dw_t \) and \( h^2(t) = \xi(t) \frac{d\sigma_t}{dt} \), we can rewrite Eq.(8) w.r.t. \( \sigma \) as

\[ dx_\sigma = \left[ \frac{1}{\sigma} + \frac{\xi(\sigma)}{2\sigma^2} \right] [x_\sigma - x_\theta(x_\sigma, \sigma)]\, d\sigma + \sqrt{\xi(\sigma)}\, dw_\sigma. \quad (9) \]

We propose the exact solution for Eq.(9) using the variation-of-constants formula (Lu et al., 2022b).

**Proposition 2** (Exact solution of the VE SDE, proof in Appendix A.2). Given an initial value \( x_s \) at time \( s > 0 \), the solution \( x_t \) at time \( t \in [0, s] \) of the VE SDE in Eq.(9) is:

\[ x_t = \frac{\phi(\sigma_t)}{\phi(\sigma_s)} x_s + \phi(\sigma_t) \int_{\sigma_t}^{\sigma_s} \frac{\phi^{(1)}(\sigma)}{\phi^{2}(\sigma)}\, x_\theta(x_\sigma, \sigma)\, d\sigma + \sqrt{\sigma_t^2 - \sigma_s^2 \left[ \frac{\phi(\sigma_t)}{\phi(\sigma_s)} \right]^{2}}\, z_s, \quad (10) \]

where \( z_s \sim \mathcal{N}(0, I) \), \( \phi(\cdot) \) is differentiable, and \( \int \left[ \frac{1}{\sigma} + \frac{\xi(\sigma)}{2\sigma^2} \right] d\sigma = \ln \phi(\sigma) \).
Notably, the nonlinear term in Eq.(10) involves the integration of a non-analytical neural network \( x_\theta(x_\sigma, \sigma) \), which can be challenging to compute. For practical applicability, Proposition 3 furnishes high-stage solvers (following Gonzalez et al., 2023) for Eq.(10).

**Proposition 3** (High-stage approximations of the VE SDE, proof in Appendix A.3). Given an initial value \( x_T \) and \( M + 1 \) time steps \( \{t_i\}_{i=0}^M \) decreasing from \( t_0 = T \) to \( t_M = 0 \), and starting with \( \tilde{x}_{t_0} = x_T \), the sequence \( \{\tilde{x}_{t_i}\}_{i=1}^M \) is computed iteratively as follows:

\[
\tilde{x}_{t_i} = \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \tilde{x}_{t_{i-1}} + \left[ 1 - \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \right] x_\theta(\tilde{x}_{\sigma_{t_{i-1}}}, \sigma_{t_{i-1}}) + \sqrt{\sigma_{t_i}^2 - \sigma_{t_{i-1}}^2 \left[ \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \right]^{2}}\, z_{t_{i-1}}
\]
\[
+ \sum_{n=1}^{k-1} x_\theta^{(n)}(\tilde{x}_{\sigma_{t_{i-1}}}, \sigma_{t_{i-1}}) \left[ \frac{(\sigma_{t_i} - \sigma_{t_{i-1}})^n}{n!} + \phi(\sigma_{t_i}) \int_{\sigma_{t_i}}^{\sigma_{t_{i-1}}} \frac{(\sigma - \sigma_{t_{i-1}})^{n-1}}{(n-1)!\,\phi(\sigma)}\, d\sigma \right], \quad (11)
\]

where \( k \geq 1 \) and \( x_\theta^{(n)}(x_\sigma, \sigma) := \frac{d^n x_\theta(x_\sigma, \sigma)}{d\sigma^n} \) is the \( n \)-th order total derivative of \( x_\theta(x_\sigma, \sigma) \) w.r.t. \( \sigma \).

The term \( \int_{\sigma_{t_i}}^{\sigma_{t_{i-1}}} \frac{(\sigma - \sigma_{t_{i-1}})^{n-1}}{(n-1)!\,\phi(\sigma)}\, d\sigma \) in Eq.(11) lacks an analytical expression, and we resort to \( N \)-point numerical integration for its estimation. The detailed algorithms are given in Appendix B.

### 3.3 VP ER-SDE-SOLVERS

For the VP SDE, \( f(t) = \frac{d \log \alpha_t}{dt} \) and \( g(t) = \sqrt{\frac{d\sigma_t^2}{dt} - 2 \frac{d \log \alpha_t}{dt} \sigma_t^2} \) (Kingma et al., 2021). The relationship between the score prediction model \( s_\theta(x_t, t) \) and the data prediction model \( x_\theta(x_t, t) \) is \( -[x_t - \alpha_t x_\theta(x_t, t)]/\sigma_t^2 = s_\theta(x_t, t) \). By replacing the score function with the data prediction model, Eq.(7) can be written as:

\[ dx_t = \left\{ \left[ \frac{1}{\sigma_t} \frac{d\sigma_t}{dt} + \frac{h^2(t)}{2\sigma_t^2} \right] x_t - \left[ \frac{1}{\sigma_t} \frac{d\sigma_t}{dt} - \frac{1}{\alpha_t} \frac{d\alpha_t}{dt} + \frac{h^2(t)}{2\sigma_t^2} \right] \alpha_t x_\theta(x_t, t) \right\} dt + h(t)\, dw_t. \quad (12) \]

Let \( h(t) = \eta(t)\alpha_t \), \( y_t = \frac{x_t}{\alpha_t} \) and \( \lambda_t = \frac{\sigma_t}{\alpha_t} \). Denoting \( dw_\lambda := \sqrt{\frac{d\lambda_t}{dt}}\, dw_t \) and \( \eta^2(t) = \xi(t) \frac{d\lambda_t}{dt} \), we can rewrite Eq.(12) w.r.t. \( \lambda \) as

\[ dy_\lambda = \left[ \frac{1}{\lambda} + \frac{\xi(\lambda)}{2\lambda^2} \right] [y_\lambda - x_\theta(x_\lambda, \lambda)]\, d\lambda + \sqrt{\xi(\lambda)}\, dw_\lambda. \quad (13) \]

Following Lu et al. (2022b), we propose the exact solution for Eq.(13) using the variation-of-constants formula.

**Proposition 4** (Exact solution of the VP SDE, proof in Appendix A.4). Given an initial value \( x_s \) at time \( s > 0 \), the solution \( x_t \) at time \( t \in [0, s] \) of the VP SDE in Eq.(13) is:

\[ x_t = \frac{\alpha_t\, \phi(\lambda_t)}{\alpha_s\, \phi(\lambda_s)} x_s + \alpha_t\, \phi(\lambda_t) \int_{\lambda_t}^{\lambda_s} \frac{\phi^{(1)}(\lambda)}{\phi^{2}(\lambda)}\, x_\theta(x_\lambda, \lambda)\, d\lambda + \alpha_t \sqrt{\lambda_t^2 - \lambda_s^2 \left[ \frac{\phi(\lambda_t)}{\phi(\lambda_s)} \right]^{2}}\, z_s, \quad (14) \]

where \( z_s \sim \mathcal{N}(0, I) \).
\(\phi(\cdot)\) is differentiable and \( \int \left[ \frac{1}{\lambda} + \frac{\xi(\lambda)}{2\lambda^2} \right] d\lambda = \ln \phi(\lambda) \). The solution of the VP SDE also involves integrating a non-analytical and nonlinear neural network. Proposition 5 furnishes high-stage solvers (following Gonzalez et al., 2023) for Eq.(14).

**Proposition 5** (High-stage approximations of the VP SDE, proof in Appendix A.5). Given an initial value \( x_T \) and \( M + 1 \) time steps \( \{t_i\}_{i=0}^M \) decreasing from \( t_0 = T \) to \( t_M = 0 \), and starting with \( \tilde{x}_{t_0} = x_T \), the sequence \( \{\tilde{x}_{t_i}\}_{i=1}^M \) is computed iteratively as follows:

\[
\tilde{x}_{t_i} = \frac{\alpha_{t_i}\,\phi(\lambda_{t_i})}{\alpha_{t_{i-1}}\,\phi(\lambda_{t_{i-1}})} \tilde{x}_{t_{i-1}} + \alpha_{t_i} \left[ 1 - \frac{\phi(\lambda_{t_i})}{\phi(\lambda_{t_{i-1}})} \right] x_\theta(\tilde{x}_{t_{i-1}}, \lambda_{t_{i-1}}) + \alpha_{t_i} \sqrt{\lambda_{t_i}^2 - \lambda_{t_{i-1}}^2 \left[ \frac{\phi(\lambda_{t_i})}{\phi(\lambda_{t_{i-1}})} \right]^{2}}\, z_{t_{i-1}}
\]
\[
+ \alpha_{t_i} \sum_{n=1}^{k-1} x_\theta^{(n)}(\tilde{x}_{t_{i-1}}, \lambda_{t_{i-1}}) \left[ \frac{(\lambda_{t_i} - \lambda_{t_{i-1}})^n}{n!} + \phi(\lambda_{t_i}) \int_{\lambda_{t_i}}^{\lambda_{t_{i-1}}} \frac{(\lambda - \lambda_{t_{i-1}})^{n-1}}{(n-1)!\,\phi(\lambda)}\, d\lambda \right], \quad (15)
\]

where \( k \geq 1 \) and \( x_\theta^{(n)}(x_\lambda, \lambda) := \frac{d^n x_\theta(x_\lambda, \lambda)}{d\lambda^n} \) is the \( n \)-th order total derivative of \( x_\theta(x_\lambda, \lambda) \) w.r.t. \( \lambda \). Similarly, we employ \( N \)-point numerical integration to estimate \( \int_{\lambda_{t_i}}^{\lambda_{t_{i-1}}} \frac{(\lambda - \lambda_{t_{i-1}})^{n-1}}{(n-1)!\,\phi(\lambda)}\, d\lambda \) in Eq.(15). The detailed algorithms are provided in Appendix B.

## 4 ELUCIDATING THE SOLUTION SPACE OF ER SDE

This section focuses on the solution space of the ER SDE. Specifically, in Sec. 4.1 we provide a mathematical explanation for experimental observations made in previous research. Furthermore, we introduce several specialized Extended Reverse-Time SDE Solvers (ER-SDE-Solvers) in Sec. 4.2, which achieve rapid sampling performance competitive with ODE solvers.

### 4.1 INSIGHTS ABOUT THE SOLUTION SPACE OF ER SDE

Sec. 3 demonstrates that the exact solution of the ER SDE comprises three components: a linear function of the data variable, a nonlinear function parameterized by neural networks, and a noise term. The linear and noise terms can be computed exactly, while discretization errors arise in the nonlinear term. Since the error decreases as the stage increases (see Table 1), the first-order error dominates the overall error. We therefore take the case \( k = 1 \) for the error analysis. Specifically, the first-order approximation of the VE SDE is given by

\[ \tilde{x}_{t_i} = \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \tilde{x}_{t_{i-1}} + \left[ 1 - \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \right] x_\theta(\tilde{x}_{t_{i-1}}, \sigma_{t_{i-1}}) + \sqrt{\sigma_{t_i}^2 - \sigma_{t_{i-1}}^2 \left[ \frac{\phi(\sigma_{t_i})}{\phi(\sigma_{t_{i-1}})} \right]^{2}}\, z_{t_{i-1}}, \quad (16) \]

and the first-order approximation of the VP SDE is

\[ \tilde{x}_{t_i} = \frac{\alpha_{t_i}\,\phi(\lambda_{t_i})}{\alpha_{t_{i-1}}\,\phi(\lambda_{t_{i-1}})} \tilde{x}_{t_{i-1}} + \alpha_{t_i} \left[ 1 - \frac{\phi(\lambda_{t_i})}{\phi(\lambda_{t_{i-1}})} \right] x_\theta(\tilde{x}_{\lambda_{t_{i-1}}}, \lambda_{t_{i-1}}) + \alpha_{t_i} \sqrt{\lambda_{t_i}^2 - \lambda_{t_{i-1}}^2 \left[ \frac{\phi(\lambda_{t_i})}{\phi(\lambda_{t_{i-1}})} \right]^{2}}\, z_{t_{i-1}}. \quad (17) \]

Figure 3: FEI coefficients (a) and FID scores (b) for distinct noise scale functions. The 1st-order solver is used here with the pretrained EDM. In the solution space of the ER SDE, the ODE solver shows the minimal discretization error. ER SDE 4 exhibits discretization error that closely adheres to the behavior of the ODE. ER SDE 5 demonstrates elevated error in the initial ~100 steps and gradually converges to the ODE's error profile. Both ER SDE 4 and ER SDE 5 exhibit efficiency comparable to the optimal ODE solver. Image quality deteriorates for ill-suited noise scale functions (like ER SDE 2).

We observe that the discretization errors of both the VE SDE and the VP SDE are governed by the First-order Euler Integral (FEI) coefficient \( 1 - \frac{\phi(x_t)}{\phi(x_s)} \), which is determined solely by the noise scale function \( \phi(x) \) introduced in the reverse process. As \( \phi(x) \) is arbitrary, different noise scale functions correspond to different solutions, collectively forming the solution space of the ER SDE (here we borrow the concept of a solution space from linear algebra (Leon et al., 2006)).

**ODE Solvers outperform SDE Solvers:** Taking the first-order approximation of the VE SDE as an example, an intuitive strategy for reducing the discretization error is to decrease the FEI coefficient. Since \( \frac{\phi(\sigma_t)}{\phi(\sigma_s)} \leq \frac{\sigma_t}{\sigma_s} \) (see Appendix A.7), the minimum value of the FEI coefficient is \( 1 - \frac{\sigma_t}{\sigma_s} \) rather than 0. Interestingly, when the FEI coefficient reaches its minimum value, the ER SDE reduces exactly to the ODE (in this case, \( \phi(\sigma) = \sigma \)). This implies that the optimal solution of the ER SDE is exactly the ODE. Further analysis reveals that when \( \phi(\sigma) = \sigma^2 \), the ER SDE reduces to the reverse-time SDE. The detailed derivation can be found in Appendix A.6. As shown in Fig. 3(a), the discretization error of the ODE solver is smaller than that of the SDE solvers for the same number of steps. Fig. 3(b) further illustrates that the ODE solver outperforms the SDE solvers in efficient sampling, consistent with Zhang & Chen (2023).

**VP SDE Solvers achieve parity with VE SDE Solvers:** The only difference between Eq.(16) and Eq.(17) is the rescaling by \( \alpha_t \) (Eq.(17) is Eq.(16) applied to the rescaled variable \( x_t/\alpha_t \)), so their relative errors remain the same. In other words, the performance of the VP SDE solver and the VE SDE solver is equivalent under the same number of steps and the same pretrained model. Directly comparing VP SDE and VE SDE solvers experimentally has been challenging in prior research due to the absence of a generative model that simultaneously supports both types of SDEs. This has led to divergent conclusions, with some studies (Song et al., 2021b) finding that the VE SDE provides better sample quality than the VP SDE, while others (Jolicoeur-Martineau et al., 2021) reach the opposite conclusion. Fortunately, EDM (Karras et al., 2022) allows for a fair comparison between the VP SDE and the VE SDE, as elaborated in Appendix C.3.

### 4.2 CUSTOMIZED FAST SDE SOLVERS

To further demonstrate how the noise scale function \( \phi(x) \) directly impacts the efficiency of the sampling process, we first consider three different forms of \( \phi(x) \): ER SDE 1: \( \phi(x) = x^{1.5} \); ER SDE 2: \( \phi(x) = x^{2.5} \); ER SDE 3: \( \phi(x) = x^{0.9} \log_{10}(1 + 100x^{1.5}) \). Fig. 3 illustrates that unfavorable choices of \( \phi(x) \) (such as ER SDE 2) lead to significant discretization errors and inefficient sampling.
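To make the role of \( \phi(x) \) concrete, the sketch below implements a single step of the first-order VE update in Eq.(16) with a pluggable noise scale function (the three example functions above are included). Here `x_theta` is assumed to be the output of a pretrained data prediction model; the naming is ours and is not taken from the released implementation.

```python
import math
import torch

# Example noise scale functions from the text; phi(x) = x would recover the ODE case.
phi_er_sde_1 = lambda x: x ** 1.5
phi_er_sde_2 = lambda x: x ** 2.5
phi_er_sde_3 = lambda x: x ** 0.9 * math.log10(1.0 + 100.0 * x ** 1.5)

def er_sde_first_order_step(x, x_theta, sigma_s, sigma_t, phi):
    """One first-order VE ER-SDE step (Eq. 16) from noise level sigma_s down to sigma_t.

    x: current sample; x_theta: data prediction at (x, sigma_s); phi: noise scale function.
    The FEI coefficient of this step is 1 - phi(sigma_t) / phi(sigma_s).
    """
    ratio = phi(sigma_t) / phi(sigma_s)
    var = max(sigma_t ** 2 - (sigma_s * ratio) ** 2, 0.0)   # variance of the injected noise
    return ratio * x + (1.0 - ratio) * x_theta + var ** 0.5 * torch.randn_like(x)
```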
Thus, it is crucial to carefully select the noise scale function $\phi(x)$ to achieve high-quality sampling with fewer steps. Since the optimal solution of ER SDE is ODE, our initial idea is to make the FEI coefficient as close as possible to the ODE case, i.e., $$\text{ER SDE 4: } \phi(x) = x \left( e^{-\frac{x}{2}} + 10 \right).$$ (18) Since Eq. (7) involves implicit Langevin diffusion, which can effectively correct any errors in the early sampling steps (Karras et al., 2022), it is reasonable to allow for a controlled amplification of errors when the number of steps is relatively small. Consequently, we propose an alternative ER SDE solver where $$\text{ER SDE 5: } \phi(x) = x(e^{x^{0.3}} + 10).$$ (19) Although ER SDE 5 exhibits more significant errors in the early stages (~100 steps) in Fig 3(a), its later-stage errors closely approach the minimum error (i.e., the error of the ODE). As a result, the image quality produced by ER SDE 5 becomes comparable to that of ER SDE 4 on the CIFAR-10 dataset (Krizhevsky et al., 2009), as shown in Fig 3(b). In fact, ER SDE 5 demonstrates superior performance on the ImageNet $64 \times 64$ dataset (Deng et al., 2009) (see Table 4). Therefore, we select ER SDE 5 as the noise scale function by default in the subsequent experiments. This strategy not only facilitates rapid sampling but also contributes to preserving the stochastic noise introduced by the reverse process, thereby enhancing higher-quality in the generated images. In fact, there are countless possible choices for $\phi(x)$, and we have only provided a few examples here. Researchers should select the suitable one based on specific application scenarios, as detailed in Appendix A.8. 5 EXPERIMENTS In this section, we demonstrate that ER-SDE-Solvers can significantly accelerate the sampling process of existing pretrained DMs. We vary the number of function evaluations (NFE), i.e., the invocation number of the data prediction model, and compare the sample quality between ER-SDE-Solvers of different stages and other training-free samplers. For each experiment, we draw 50K samples and employ the widely-used FID score (Heusel et al., 2017) to evaluate sample quality, where a lower FID typically signifies better sample quality. For detailed implementation and experimental settings, please refer to Appendix C. 5.1 DIFFERENT STAGES OF VE ER-SDE-SOLVERS AND VP ER-SDE-SOLVERS To ensure a fair comparison between VP ER-SDE-Solvers and VE ER-SDE-Solvers, we opt for EDM (Karras et al., 2022) as the pretrained model, as detailed in Appendix C.3. It can be observed from Table 1 that the image generation quality produced by both of them is similar, consistent with the findings in Sect. 4.1. Additionally, the high-stage ER-SDE-Solver-3 converges faster than ER-SDE-Solver-2, particularly in the few-step regime under 20 NFE. This is because higher stages result in more minor discretization errors. We also arrive at the same conclusions on the CIFAR-10 dataset (Krizhevsky et al., 2009), as can be found in Table 6. Table 1: Sample quality measured by FID↓ on ImageNet $64 \times 64$ for different stages of VE(P) ER-SDE-Solvers with EDM, varying the NFE. VE(P)-x denotes the x-th stage VE(P) ER-SDE-Solver. 
| Method | NFE | 10 | 20 | 30 | 50 | |--------|-----|------|------|------|------| | VE-2 | | 11.81| 3.67 | 2.67 | 2.31 | | VP-2 | | 11.94| 3.73 | 2.67 | 2.27 | | VE-3 | | 11.46| 3.45 | 2.58 | 2.24 | | VP-3 | | 11.32| 3.48 | 2.58 | 2.28 | 5.2 COMPARISON WITH OTHER TRAINING-FREE METHODS We compare ER-SDE-Solvers with other training-free sampling methods, including stochastic samplers such as SDE-DPM-Solver++ (Lu et al., 2022b), as well as deterministic samplers like DDIM (Song et al., 2021a), DPM-Solver (Lu et al., 2022a) and DPM-Solver++ (Lu et al., 2022b). Table 2 presents experimental results on the ImageNet $64 \times 64$ dataset (Deng et al., 2009) using the same pretrained model EDM (Karras et al., 2022). It is evident that ER-SDE-Solvers emerge Table 2: Sample quality measured by FID↓ on ImageNet $64 \times 64$ with the pretrained model EDM, varying the NFE. The upper right — indicates a reduction of NFE by one. | Sampling method\NFE | 10 | 20 | 30 | 50 | |--------------------|------|------|------|------| | Stochastic | | | | | | DDIM($\eta = 1$) (Song et al., 2021a) | 49.28 | 23.32 | 13.73 | 7.35 | | SDE-DPM-Solver++(2M) (Lu et al., 2022b) | 21.30 | 6.36 | 3.61 | 2.29 | | EDM-Stochastic (Karras et al., 2022) | 57.47 | 6.17 | 3.46 | 2.49 | | Ours(ER-SDE-Solver-3) | **11.32** | **3.45** | **2.58** | **2.24** | | Deterministic | | | | | | DDIM (Song et al., 2021a) | 17.33 | 6.45 | 4.33 | 3.15 | | EDM-Deterministic (Karras et al., 2022) | 35.59 | 3.56 | 2.60 | 2.34 | | DPM-Solver-3 (Lu et al., 2022a) | 6.88 | 3.02 | 2.56 | 2.35 | | DPM-Solver++(2M) (Lu et al., 2022b) | **6.48** | **2.98** | **2.56** | **2.35** | Table 3: Sample quality measured by FID↓ on class-conditional ImageNet $256 \times 256$ with the pretrained model Guided-diffusion (classifier guidance scale = 2.0, linear noise schedule), varying the NFE. | Sampling method\NFE | 10 | 20 | 30 | 50 | |--------------------|------|------|------|------| | Stochastic | | | | | | DDIM($\eta = 1$) (Song et al., 2021a) | 17.97 | 10.23 | 8.19 | 6.85 | | SDE-DPM-Solver++(2M) (Lu et al., 2022b) | 9.21 | 6.01 | 5.47 | 5.19 | | Ours(ER-SDE-Solver-3) | **6.24** | **4.76** | **4.62** | **4.57** | | Deterministic | | | | | | DDIM (Song et al., 2021a) | 8.63 | 5.60 | 5.00 | 4.59 | | DPM-Solver-3 (Lu et al., 2022a) | **6.45** | **5.03** | **4.94** | **4.92** | | DPM-Solver++(2M) (Lu et al., 2022b) | 7.19 | 5.54 | 5.32 | 5.16 | as the most efficient stochastic samplers, achieving a remarkable $2 \sim 8 \times$ speedup compared with previously state-of-the-art stochastic sampling methods, particularly when the NFE is limited. It is not surprising that image quality generated by ER-SDE-Solvers within the initial 20 NFE is relatively inferior to some deterministic samplers such as DPM-Solver, consistent with the theoretical justification in Sec. 4.1. However, our stochastic sampler leverages the noise introduced during the reverse process to rectify errors present in earlier steps. Notably, the FID drops to 2.24 when NFE is 50, surpassing even the deterministic samplers. Particularly, we combine ER-SDE-Solvers with classifier guidance to generate high-resolution images. Table 3 provides comparative results on ImageNet $256 \times 256$ (Deng et al., 2009) using Guided-diffusion (Dhariwal & Nichol, 2021) as the pretrained model. We surprisingly find that classifier guidance significantly improves the image generation quality of ER-SDE-Solvers even with very low NFE compared to ODE Solvers. 
This may be attributed to the customized noise injected into the sampling process, which mitigates the inaccuracies in data estimation introduced by classifier gradient guidance. Further investigation into the specific reasons is left for future work. Moreover, the images produced by ER-SDE-Solvers exhibit greater variability compared to deterministic samplers, as illustrated in Appendix D. We also provide comparisons on various datasets using different pretrained models in Appendix D. 6 CONCLUSION We address the challenges of fast and training-free sampling in DMs. Initially, we formulate the sampling problem as an ER SDE, which unifies ODEs and SDEs in previous studies. Leveraging the semi-linear structure of ER SDE solutions, we provide exact solutions and high-stage approximations for both VP SDE and VE SDE. Building upon it, we establish two crucial findings from a mathematical standpoint: the superior performance of ODE solvers for rapid sampling over SDE solvers, and the comparable performance of VP SDE solvers with VE SDE solvers. Finally, we introduce state-of-the-art stochastic fast samplers, ER-SDE-Solvers, by adeptly selecting noise scale functions for the sampling process. REPRODUCIBILITY STATEMENT For the theoretical findings of the proposed diffusion equation ER SDE and its solutions, comprehensive explanations and complete proofs are provided in Appendix A. For the algorithms of ER Solvers, pseudocodes are presented in Appendix B, while detailed experimental setups can be found in Appendix C. Our code utilizes pretrained checkpoints from https://github.com/NVlabs/edm and https://github.com/openai/guided-diffusion. Detailed code is made available in the supplementary material. As for the datasets employed in our experiments, CIFAR-10 (Krizhevsky et al., 2009), FFHQ (Karras et al., 2019), ImageNet (Deng et al., 2009) and LSUN (Yu et al., 2015) are publicly accessible datasets, ensuring transparency and reproducibility in our research. ETHICS STATEMENT In line with other advanced deep generative models like GANs, DMs can be harnessed to produce deceptive or misleading content, particularly in manipulated images. The efficient solvers we propose herein offer the capability to expedite the sampling process of DMs, thereby enabling faster image generation and manipulation, potentially leading to the creation of convincing but fabricated visuals. As with any technology, this acceleration could accentuate the potential ethical concerns associated with DMs, particularly their susceptibility to misuse or malicious applications. For instance, more frequent image generation might elevate the likelihood of unauthorized exposure of personal information, facilitate content forgery and dissemination of false information, and potentially infringe upon intellectual property rights. REFERENCES Brian DO Anderson. Reverse-time diffusion equation models. *Stochastic Processes and their Applications*, 12(3):313–326, 1982. Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18208–18218, 2022. Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. In *International Conference on Learning Representations*, 2022. Hyungjin Chung, Byeongsu Sim, and Jong Chul Ye. 
Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12413–12422, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in neural information processing systems*, 34:8780–8794, 2021. Sicheng Gao, Xuhui Liu, Bohan Zeng, Sheng Xu, Yanjing Li, Xiaoyan Luo, Jianzhuang Liu, Xiantong Zhen, and Baochang Zhang. Implicit diffusion models for continuous super-resolution. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10021–10030, 2023. Martin Gonzalez, Nelson Fernandez, Thuy Vinh Dinh Tran, Elies Gherbi, Hatem Hajri, and Nader Masmoudi. SEEDS: Exponential SDE solvers for fast high-quality sampling from diffusion models. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in neural information processing systems*, 27, 2014. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30, 2017.
T2ToleSDk6
The biggest concern I have is the way this paper models cost values. It approximates cost value as $\lambda_d (D_f(d\| d^E)-\epsilon)$. This is very problematic since being sub-optimal does not necessarily mean it is unsafe or a cost violation. Matching with expert occupancy measures is essentially doing some sort of inverse RL on rewards rather than conventional constraint learning.
Learning Constraints from Offline Dataset via Inverse Dual Values Estimation Anonymous authors Paper under double-blind review Abstract To develop safe control strategies, Inverse Constrained Reinforcement learning (ICRL) infers constraints from expert demonstrations and trains policy models under these constraints. Classical ICRL algorithms typically adopt an online learning diagram that permits boundless exploration in an interactive environment. However, in realistic applications, iteratively collecting experiences from the environment is dangerous and expensive, especially for safe-critical control tasks. To address this challenge, in this work, we present a novel Inverse Dual Values Estimation (IDVE) framework. To enable offline ICRL, IDVE dynamically combines the conservative estimation inherent in offline RL and the data-driven inference in inverse RL, thereby effectively learning constraints from limited data. Specifically, IDVE derives the dual values functions for both rewards and costs, estimating their values in a bi-level optimization problem based on the offline dataset. To derive a practical IDVE algorithm for offline constraint inference, we introduce the method of 1) handling unknown transitions, 2) scaling to continuous environments, and 3) controlling the degree of sparsity regularization. Under these advancements, empirical studies demonstrate that IDVE outperforms other baselines in terms of accurately recovering the constraints and adapting to high-dimensional environments with diverse reward configurations. 1 Introduction In order to deploy Reinforcement Learning (RL) algorithms to solve safety-critical applications, the control policy must conform to some underlying constraints in the environment (Liu et al., 2021). However, in many real-world tasks, due to the inherent complexity of environmental dynamics, the optimal constraint is often time-varying, context-sensitive, and rooted in the human experience. These constraints are difficult to specify manually with prior knowledge and may not be readily available to RL agents in policy learning. To resolve these problems, Inverse Constraint Reinforcement Learning (ICRL) considers inferring constraints from the expert demonstrations. Existing ICRL algorithms (Scobee & Sastry, 2020; Malik et al., 2021; Liu & Zhu, 2022; Gaurav et al., 2023; Liu et al., 2023; Papadimitriou et al., 2023) follow an online learning paradigm where the agent can explore and collect experience from an interactive environment. However, the boundless exploration often involves unsafe behaviors that potentially violate the underlying constraints. Such a shortcoming is fundamental since the violation of constraint contradicts the primary goal of safe control and may cause significant loss in practical applications, especially for the algorithms that require a large number of training samples. An effective method to overcome the above limitations is designing an offline ICRL algorithm that relies only on offline datasets for constraint inference. This task is challenging, primarily due to 1) offline datasets can cover only a partial knowledge of the training environment and the algorithm must learn to handle the lack of knowledge in unvisited states. 2) without actively exploring specific actions and observing their outcomes, the offline ICRL algorithm must accurately identify the unsafe behaviors by relying only on the offline dataset. 
To address the aforementioned challenges in offline ICRL, in this work, we propose an Inverse Dual Value Estimation (IDVE) framework. IDVE reformulates constraint inference as a regularized policy learning problem, thereby ensuring both the safety and conservatism of a control strategy. This is achieved by regularizing the deviation from the expert policy and incentivizing the agent to operate within the distribution of offline datasets. By leveraging the Lagrange duality, we deduce an analytic solution for our problem by embedding the optimal visitation distribution into value functions of costs and rewards. Such functions, being sensitive to the agent’s performance in reward maximization and cost avoidance, provide an effective quantification of the agent’s behavioral feasibility. To learn these value functions from datasets, we derive a bi-level optimization objective that alternatively updates the value functions for rewards and costs in an offline diagram. For practical usage, we design a IDVE algorithm that can effectively handle the incomplete or unknown transition, scale to a continuous environment, and control the sparsity of learned constraints. These advancements enable IDVE to significantly outperform other baselines by inferring more accurate constraints and safer control policies. We conduct an in-depth evaluation to study its performance under various hyperparameters and how well the constraints transfer to new environments. Our main contributions are as follows: 1) To the best of our knowledge, there is no prior solution for offline ICRL. As the first attempt, our IDVE framework derives dual value functions from regularized policy learning and proposes the bi-level optimization method for updating value functions, which could serve as a strong cornerstone for future advancements. 2) We design an IDVE algorithm that solves critical problems caused by the continuous environment, unknown transitions, and scale of constraints. These advancements bridge IDVE to accomplish practical applications. 2 RELATED WORK Inverse Constrained Reinforcement Learning (ICRL). Prior ICRL methods extend the maximum entropy framework (Ziebart et al., 2008) for learned constraints from both the expert demonstrations and the interactive MDP environment (without the constraints). In the discrete state-action space, some recent research (Scobee & Sastry, 2020; McPherson et al., 2021) inferred the constrained sets for recording infeasible state-action pairs in Constrained MDP, but these studies were restricted to the environments with known dynamics. A subsequent work (Malik et al., 2021) extended this approach to continuous state-action spaces with unknown transition models by utilizing neural networks to approximate constraints. To enable constraint inference in stochastic environments, (Papadimitriou et al., 2023) inferred probability distributions over constraints by utilizing the Bayesian framework, and (Baert et al., 2023) incorporate the maximum causal entropy objective (Ziebart et al., 2010) into ICRL. Some recent works explore ICRL under different settings, e.g., (Gaurav et al., 2023) extended ICRL to infer soft constraints, and (Liu & Zhu, 2023) explored ICRL under the multi-agent setting. Striving for efficient comparisons, (Liu et al., 2023) established an ICRL benchmark across various RL domains. However, these algorithms primarily target online ICRL that infer constraints by interacting with environments instead of with only offline datasets. Offline Reinforcement Learning. 
Offline RL utilizes a data-driven RL paradigm where the agent learns the control policy exclusively from static datasets of previously collected experiences (Levine et al., 2020). To mitigate the distributional shift between training samples and testing data, previous offline RL solutions commonly involve constraining the learned policy to the data-collecting policy (Fujimoto et al., 2019; Kumar et al., 2019), making conservative estimates of future rewards (Kumar et al., 2020; Yu et al., 2021), and developing uncertainty-aware action selector (Janner et al., 2019; Kidambi et al., 2020). Some recent advancements on Offline RL (Sikchi et al., 2023; Xu et al., 2023) studied a regularized policy optimization problem with convex objective and linear constraints. A recent IRL work (Yue et al., 2023) considered recovering conservative rewards from offline datasets, but none of these methods has studied the offline constraint inference. 3 PROBLEM FORMULATION Constrained Reinforcement Learning (CRL). A CRL problem is commonly based on a stationary Constrained Markov Decision Process (CMDP) $\mathcal{M} \cup c := (S, A, p_T, r, c, \epsilon, \rho_0, \gamma)$ where: 1) $S$ and $A$ denote the space of states and actions. 2) $p_T \in \Delta_S^S \times A$ defines the transition distributions. 3) $r : S \times A \rightarrow [0, R_{\text{max}}]$ and $c : S \times A \rightarrow [0, C_{\text{max}}]$ denotes the reward and cost functions. 4) $\epsilon \in \mathbb{R}_+$ denotes the bound of cumulative costs. 5) $\rho_0 \in \Delta_S$ denotes the initial states distribution. 6) $\gamma \in [0, 1)$ is the discount factor. The goal of CRL policy $\pi \in \Delta_A^S$ is to maximize the expected discounted rewards under known constraints: $$\arg \max_{\pi} \mathbb{E}_{p_T, \pi, \rho_0} \left[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \right] \text{ s.t. } \mathbb{E}_{p_T, \pi, \rho_0} \left[ \sum_{t=0}^{T} \gamma^t c(s_t, a_t) \right] \leq \epsilon$$ $\Delta_X^Y$ denotes the probabilistic simplex in the space $X$, and $\Delta_Y^X$ denotes a function maps $Y$ to $\Delta_X$. Following the setting in Malik et al. (2021), we are mainly interested in the hard constraints such that \( \epsilon = 0 \). Striving for clarity, we define the CMDP with the known cost as \( M \cup c \), and the CMDP without cost (i.e., CMDP\(\backslash c\)) as \( M \). Accordingly, the visitation distribution \( d^\pi \in \Delta^{S \times A} \) (i.e., the normalized occupancy measure) produced by policy \( \pi \) can be denoted as: \[ d^\pi(s, a) = (1 - \gamma)\pi(a|s) \sum_{t=0}^{\infty} \gamma^t p(s_t = s|\pi) \] where \( p(s_t = s|\pi) \) defines the probability of arriving state \( s \) at time step \( t \) by performing policy \( \pi \). **Inverse Constraint Reinforcement Learning.** Note that traditional CRL problems often assume the constraint signals \( c(\cdot) \) are directly observable from the environment, but in real-world problems, instead of observing the constraint signals, we often have access to expert demonstrations \( D_E \) that adhere to these constraints, and the agent is required to recover the constraint models from the dataset. This task is challenging because various combinations of rewards and constraints can explain the same expert demonstrations. Striving for the identifiability of solutions, ICRL algorithms Malik et al. (2021); Liu et al. (2023); Papadimitriou et al. 
(2023) typically assume that reward signals are observable and the goal is to recover only the constraints, in contrast to Inverse Reinforcement Learning (IRL) Ziebart et al. (2008), which aims to learn rewards from an unconstrained MDP. **Identifiability Issue.** Similar to IRL, the optimal constraint in ICRL is not uniquely identifiable, indicating that multiple constraints may equivalently explain the expert behaviors. To address this issue, ICRL algorithms aim to learn the minimal constraints under which the imitation agent can reproduce the behaviors of the expert Scobee & Sastry (2020). These constraints are defined so as to prohibit risky movements that could yield cumulative rewards exceeding those obtained by the expert. This is because we assume that experts optimally maximize rewards within their constraints. Hence, if an agent surpasses an expert’s rewards, it indicates inherent risk in the move. **From Online to Offline ICRL.** Classic ICRL algorithm typically follows an online learning paradigm where the agent iteratively collects experience by interacting with the environment and using that experience for updating constraints and policy. Nevertheless, in many realistic settings, online interaction is impractical, either because data collection is expensive (e.g., in robotics, educational agents, or healthcare) or dangerous (e.g., in autonomous driving, or healthcare). To extend ICRL to the offline setting, we formally define the problem of offline ICRL as follows: **Definition 1. (Offline ICRL)** Let \( D^E = \{s_n^E, a_n^E, r_n^E\}_{n=1}^{N_E} \) denote the expert dataset generated by the agent adhering to the unobserved ground-truth constraints. Let \( D^{-E} = \{s_n, a_n, r_n\}_{n=1}^{N-E} \) denote the sub-optimal dataset generated by the agent without knowing the ground-truth constraints. Given an offline dataset \( D^O = \{D^E, D^{-E}\} \) and the threshold \( \hat{\epsilon} \), an offline ICRL problem requires estimating the cost function \( \hat{c}(\cdot) \) such that the reward-maximising policy \( \hat{\pi} \) learned under the inferred constraint can reproduce expert demonstration \( D_E \). The challenge of solving an offline ICRL problem lies in the absence of an MDP \( M \) that the algorithm can interact with, more specifically, - To infer the correct constraint, traditional online ICRL algorithms rely on active exploration of the environment for identifying the unsafe trajectories that yield larger cumulative rewards compared to expert ones. However, offline ICRL algorithms have no access to the environment. - The demonstration dataset \( D_o \) captures only the partial information of the environment, and thus the offline ICRL algorithms must learn a conservative constraint and policy representation, thereby mitigating the influence of epistemic uncertainty due to the incomplete knowledge. **4 CONSTRAINT INFERENCE VIA DUAL REINFORCEMENT LEARNING** The offline forward constraint-solving function is defined by the regularized policy learning objective. We use \( d^O \) to represent the visitation distribution in the offline dataset \( D^O \), and \( D_f(\cdot || \cdot) \) denotes the \( f \)-divergence between two distributions. Instead of maximizing the reward, we augment the reward with a divergence regularizer to prevent it from deviating beyond the coverage of the offline data. This guarantees that the agent adheres to a conservative policy. 
Specifically, we aim to maximize \[ J(\pi) = \mathbb{E}_{d^\pi(s,a)}[r(s,a)] - \xi_r D_f(d^\pi(s,a) || d^O(s,a)) \quad \text{s.t. } d^\pi(s,a)c(s,a) \leq \epsilon \quad \forall s,a \] (3) Motivated by the computational efficiency, and inspired by Sikchi et al. (2023), we reformulate the aforementioned objective (3) into a convex problem. Specifically, we aim to identify a visitation distribution that adheres to the Bellman-flow constraints: \[ \max_{d(s,a)c(s,a)\leq 0,d(s,a)\geq 0} \mathbb{E}_d[r(s,a)] - \xi_r D_f(d || d^O) \] subject to \[ \sum_{a\in A} d(s,a) = (1-\gamma)d_0(s) + \gamma \sum_{(s',a')} d(s',a')p(s'|s,a'), \forall s \in S \] To derive a solution for the problem (4), we introduce the dual variables \(V^r\) to consider the Lagrangian dual problem. Please find the detailed derivation in appendix B.1. \[ \min_{V^r} \max_d \mathbb{E}_d[\delta^r_V] - \xi_r D_f(d || d^O) + (1-\gamma)\mathbb{E}_{d_0}[V^r] \text{ s.t. } d(s,a)c(s,a) \leq \epsilon \text{ and } d(s,a) \geq 0 \] where \(\delta^r_V(s,a) = r(s,a) + \sum_{s'\in S} p_T(s'|s,a)\gamma V^r(s') - V^r(s)\). Intuitively, \(\delta^r_V(s,a)\) defines the temporal difference error in policy evaluation and the advantages in policy improvement respectively. In this study, we focus on hard constraints, denoted by \(\epsilon = 0\). This implies that the feasible set is constrained by the condition \(d(s,a)c(s,a) \leq 0\) for all \(s,a\). We assume \(c(s,a)\) is derived from the state function \(c(s')\), indicating state safety, with \(c(s,a) = \mathbb{E}_{s'\sim P(s'|s,a)}[c(s')]\). We provide the closed-form solution for the inner optimization problem in Equation (5) when \(\epsilon = 0\). Note that a similar derivation under the non-constraint case can be found in Lee et al. (2021); Sikchi et al. (2023). **Proposition 1.** Assume the following: 1) The learned visitation distribution \(d(s,a) > 0\) for all \((s,a)\) such that \(d^O(s,a) > 0\). 2) The range of the derivative of the function \(f\), i.e., \(f'\), includes 0. In other words, there exists some \(x \geq 0\) s.t. \(f'(x) = 0\). The optimal solution for the inner optimization problem in (5), denoted as \(d^*\), is given by: \[ d^*(s,a) = d^O(s,a)1_{c(s,a)=0}w^*_f(\delta^r_V(s,a)) \] Substituting \(d^*\) into problem (5) the problem becomes: \[ \min_{V^r} \mathbb{E}_{d^O} \left[ \xi_r 1_{\delta^r_V(s,a)=0} f^*_p(\frac{\delta^r_V(s,a)}{\xi_r}) \right] + (1-\gamma)\mathbb{E}_{d_0}[V^r] \] where \(w^*_f\) and \(f^*_p\) is related to the convex function \(f\) specified in \(f\)-divergence: \(w^*_f(y) = \max(0,f'^{-1}(y/\xi_r)), f^*_p(y) = \mathbb{E}_{w^*}[y] - f(w^*(y))\). The proof is in Appendix B.2. Following Proposition 1, we observe that the inner optimization problem projects every unsafe \(\omega^*_f(\delta^r_V(s,a))\) to \(d^*(s,a) = 0\). Now, considering the problem of inversely learning the cost \(c(s,a)\) from expert demonstration \(d^E\), the following proposition provides conditions for \(c(s,a)\) to solve the inverse problem, ensuring that \(d^E\) is the solution to problem (3). **Definition 2.** The set of optimal visitation distributions is defined as those distributions that satisfy Bellman flow constraints and achieve a higher cumulative reward with a regularizer, denoted as \(O = \{d : J(d) > J(d^E)\} \cap \{d : \sum_{a\in A} d(s,a) = (1-\gamma)d_0(s) + \gamma \sum_{(s',a')} d(s',a')p(s'|s,a'), \forall s \in S\}\). 
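As a concrete illustration of Proposition 1, the sketch below evaluates the closed-form reweighting $d^*(s,a) = d^O(s,a)\,\mathbb{1}_{c(s,a)=0}\,w_f^*(\delta_V^r(s,a))$ in a tiny tabular setting, instantiating $w_f^*$ for the $\chi^2$-divergence that is adopted later (Sections 4.1 and 5), i.e. $w_f^*(x) = (1 + \tfrac{x}{2\xi_r})_+$. The arrays and the value of $\xi_r$ are placeholders for illustration, not quantities taken from the paper.

```python
import numpy as np

xi_r = 0.5  # reward-regularizer temperature (placeholder value)

def w_f_star(x, xi_r=xi_r):
    """Closed-form weight for chi-square divergence: (1 + x / (2*xi_r))_+ ."""
    return np.maximum(0.0, 1.0 + x / (2.0 * xi_r))

def optimal_visitation(d_offline, cost, delta_r):
    """d*(s,a) = d^O(s,a) * 1[c(s,a)=0] * w_f*(delta^r_V(s,a))  (Proposition 1)."""
    safe_mask = (cost == 0).astype(float)   # hard constraints: epsilon = 0
    return d_offline * safe_mask * w_f_star(delta_r)

# Tiny example with 3 state-action pairs.
d_offline = np.array([0.5, 0.3, 0.2])    # offline visitation d^O
cost      = np.array([0.0, 1.0, 0.0])    # learned cost c(s,a); nonzero = unsafe
delta_r   = np.array([0.2, 0.4, -1.5])   # TD error / advantage delta^r_V(s,a)

print(optimal_visitation(d_offline, cost, delta_r))
# Unsafe pairs get zero weight, and so do low-advantage pairs with
# delta^r_V <= -2*xi_r, mirroring the clipping exploited for sparsity in Section 4.1.
```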
**Proposition 2.** The \(c(s,a)\) is a feasible solution for the inverse constraint learning problem if and only if 1) For every \(d \in O\), there exists at least one \(c(s,a) > 0\) when \(d(s,a) > 0\) and 2) \(c(s,a) = 0\) for all \(d^E(s,a) > 0\). Motivated by this proposition, we formalize our inverse constraint learning problems as a bi-level optimization problem. Initially, we solve the forward constraint equation using the formula (5), which can be viewed as sampling from the optimal visitation distribution \(O\). Once the solution \(V^r\) is obtained, representing the optimal solution under a learned cost, we update the cost function to ensure \(c(s,a) > 0\) for some \((s,a)\) such that \(d^*(s,a) > 0\), derived from \(d^*(s,a) = d^O(s,a)w^*_f(\delta^r_V(s,a))\). This iterative process is repeated until convergence. We define the objective function for the bi-level optimization problem as follows: \[ \min_{V^r} \mathbb{E}_{d^O} \left[ \xi_r 1_{\delta^r_V(s,a)\leq 0} f^*_p(\frac{\delta^r_V(s,a)}{\xi_r}) \right] + (1-\gamma)\mathbb{E}_{d_0}[V^r] \] \[ \max_{V^c} \sum_{(s,a)\in T^c} w^*_f(\delta^r_V(s,a) - (\delta^r_V(s,a))_+) - \sum_{(s,a)\in T^c} w^*_f(\delta^r_V(s,a) - (\delta^r_V(s,a))_+) \] where \( \delta_{V}^c(s,a) = \sum_{p_T(s'|s,a)} \gamma V^c(s') - V^c(s) \) and \( V^c(s) = \mathbb{E}_{\pi,p_T,\rho_0}[\sum_t \gamma^t c(s_t,a_t)|s_0=s] \) defines the cost value function. We use \( (x)_+ \) to represent \( \max(x,0) \). The state-action set visited by the expert is denoted as \( T^e = (s,a) : d^E(s,a) > 0 \), and the set not visited by the expert is denoted as \( T^w = (s,a) : d^E(s,a) = 0 \). In this formula: 1. We use the advantage of the cost function \( \delta_{V}^c(s,a) \) to represent the cost function \( c(s,a) \), aligning with our setting where \( c(s,a) = \sum_{p_T(s'|s,a)} \gamma V^c(s') - V^c(s) \) should solely depend on the future state and not the action. 2. When using the gradient method to update [9], we maximize \( \delta_{V}^c \) to 0 when \( d^E(s,a) > 0 \). This ensures that \( d^E(s,a) > 0 \) leads to \( \delta_{V}^c = 0 \). Furthermore, by minimizing \( \delta_{V}^c \) to 0 where \( d^E(s,a) = 0 \), \( V^c \) will have a gradient only when \( \omega_f(\delta_{V}^c) > 0 \). This guarantees an increase in the cost value in the state-action pair where \( d^c(s,a) > 0 \). We can prove that, following such a learning process, the optimizing process can maintain the learned optimal occupancy \( d^* \) in each round as a better policy than \( d^E \) with respect to \( J(d) \). See appendix B.4. ### 4.1 Analysis of the IDVE A critical challenge in ICRL is finding the minimum constraint that can explain the behavior of expert agents [Scobee & Sastry, 2020; Malik et al., 2021; Lu et al., 2023]. Developing a sparse constraint is critical for fostering generalizability. Without sparsity, one can learn an excessively restrictive constraint that blocks all the movement not covered by the expert dataset. While such a constraint may accurately reflect the observed expert behaviors, it lacks practical utility because: 1) the expert dataset might not record all possible movements, and 2) it cannot be generalized to environments with even minor modifications to dynamics (see experiments in Section 6.2), which commonly appears in bridging the Sim-to-Real gap in practice. To achieve this goal, it is important to encourage the sparsity of constraints. 
We claim that IDVE achieves sparsity by identifying only those constraint-violating movements that yield high rewards, aligning with the goals of ICRL (see Section 3). As an illustrative example, we consider chi-square divergence for \( D_f \): \[ f = (x - 1)^2 \quad \text{and} \quad f'(x) = 2(x - 1) \quad \text{and} \quad f'^{-1}(x) = 1 + \frac{x}{2} \] \[ w_f^*(x) = 1(x > -2\xi_r) \left( 1 + \frac{x}{2\xi_r} \right) = \left( 1 + \frac{x}{2\xi_r} \right)_+ \] In the cost learning step, if the temporal difference \( \delta_{V}^c(s,a) \leq -2\xi_r \) for any \((s,a)\) pair, indicating negligible future rewards for the action in that state, our objective ensures \( \nabla w_f^*(\cdot) = 0 \) in the cost learning objective (equation 9). Consequently, the cost-value function \( V^c(s) \) remains unchanged in the low-reward state-action pairs, which encourages the sparsity of updates in \( V^c(s) \). Intuitively, this method makes the cost function only sensitive to regions exhibiting constraint-violating behaviors, particularly those associated with rewards exceeding the expert level. In the forward solving step, for any state-action transition \((s,a,s')\) with nonzero cost \( c(s,a) \), \( 1_{\delta_{V}^c \leq 0} \) will block the update of \( V^r(s) \) from \( V^r(s') \). This acts as a safe Bellman operator, defining the optimal value function \( V^*(s) \) as \( V^{\text{safe}}(s) = \max_{a \in A} \left[ r(s,a) + \gamma P(s'|s,a)V^{\text{safe}}(s') \right] \). An example of the sparse constraint. We take the gridworld as an example. If \( \xi_r = \frac{1}{2} \), as shown in Figure 1, for \((s,a,s') = ([4,2], 'move up', [4,1]), since \( V^r([4,2]) - V^r([4,1]) < 0 \) and \( r([4,1], 'move up') = -1 \), we can conclude that \( \delta_{V}^c([4,2], 'move up') = V^r([4,2]) - V^r([4,1]) + r([4,1], 'move up') < -1 \). Therefore, \( \delta_{V}^c - \max(\delta_{V}^c, 0) \) will be clipped by the indicator \( 1(x > 2\xi_r) \) during the computation of \( w^*(x) \) in objective (9). Thus, the value of \( V^c([4,2]) \) and \( V^c([4,1]) \) remains unchanged. However, for \(([4,2], 'move down', [4,3]), \( \delta_{V}^c([4,2], 'move down') = V^r([4,3]) - V^r([4,2]) + r([4,2], 'move down') > -1 \), so Figure 1: An example of recovery \( V^r \) and \( V^c \) under the experiment setting 1 in Section 6.1. \([x,y]\) denotes the state, and the action is the moving direction (e.g., move right). the gradient in \( \delta_{V}^c \) will increase the value of \( V^c([4,3]) \), marking \([4,3]\) as an unsafe space. In the next optimizing forward problem, \([4,3]\) will be blocked from back-propagating the gradient to any other \( V^r \), to ensure a safe policy improvement. 5 Practical Implementation In this section, we introduce the practical implementation of IDVE (see Algorithm 1) by proposing the following key updates to our IDVE objective (8) and (9). We use \( \chi^2 \)-square divergence as \( D_f \). 5.1 Rewriting of Forward Optimization Problem Following Sikchi et al. 
(2023), we simplify the bi-level optimization objective (8) by replacing \( \xi_r \) and \( \gamma \) with temperature parameter \( \lambda \) and sparsity parameter \( \alpha \), becomes: \[ \max_{V^c} \sum_{(s,a) \in T^c} (\delta_{V}^c(s,a) - (\delta_{V}^c(s,a))_+ - \alpha)_+ - \sum_{(s,a) \in T^v} (\delta_{V}^r(s,a) - (\delta_{V}^r(s,a))_+ - \alpha)_+ \] \[ \min_{V^r} \lambda E_{d^O} \left[ 1_{\delta_{V}^c(s,a) \leq 0} f_p^*(\delta_{V}^r(s,a)) \right] + (1 - \lambda) E_{d^v}[V^r] \] Intuitively, \( \lambda \) governs the level of conservatism in optimization, representing the tradeoff between maximizing immediate rewards (first term) and aligning with offline data (second term). On the other hand, \( \alpha \) dictates the lower bound for clipping and update equations, capturing the trade-off associated with the sparsity of constraint recovery. 5.2 Scaling to Continuous Environment Sampling from \( d^E(s,a) = 0 \) is challenging due to limited expert trajectory samples, leaving some \((s,a)\) pairs absent. We address this by sampling \((s,a)\) from \( d^*(s,a) \) whenever \((s,a^c)\) exists in expert trajectory samples. We store the state-action pairs that significantly deviate from expert behavior in the replay buffer \( B^v \). To implement this, we use a Gaussian representation for the actor-network \( \pi_\psi \sim N(\mu,\sigma) \) to extract the policy from learned \( V^r \) and \( Q^r \). The maximization of log likelihood under optimal state-action visitation is expressed as: \[ \max_{\psi} E_{s,a \sim d^*} \left[ 1_{\delta_{V}^c(s,a) \leq 0} \omega_f^*(\delta_{V}^r(s,a)) \log \pi_\psi(s,a) \right] (\delta_{V}^c \text{ and } \delta_{V}^r \text{ denote advantages in policy update}) \] We optimize: \[ \max_{V^c} E_{d^E}[(\delta_{V}^c - \max(\delta_{V}^c,0))] - \alpha)_+ - E_{d^v}[(\delta_{V}^r - \max(\delta_{V}^r,0))] - \alpha)_+ \] Since the actions in the violation buffer result in higher rewards but are also more likely to lead to unsafe states, the reduction of \( w_f^*(\delta_{V}^r(s,a) - \max(\delta_{V}^c(s,a),0)) \) in \( d^v \) effectively increases the values of \( V^c(s') \) for states followed by state actions pair \((s,a)\) stored in the violation action replay buffer. 5.3 Tackling Unknown Transitions in the Offline Learning In our IDVE algorithm, the computing of \( \delta_{V}^r(s,a) \) and \( \delta_{V}^c(s,a) \) function requires complete knowledge of the transition \((s,a,r,s')\). In value update, online learning algorithms can interact with the environment to explore the resulting states \( s' \) by performing an action \( a \) on the state \( s \). However, in the offline setting, the dataset might not cover the returns of performing a specific action, and \( s' \) becomes unavailable without the interactive environment. To circumvent this issue, we define: \[ \delta_{V}^r(s,a) = Q^r(s,a) - V^r(s) \quad \text{and} \quad \delta_{V}^c(s,a) = Q^c(s,a) - V^c(s) \] Since the reward signals are known in the offline ICRL dataset, we adopt a semi-gradient update rule to update \( Q^r \). Specifically, we fix \( V^r \) and update \( Q^r \) by: \[ \min_{\phi^r} E_{(s,a,s') \sim D} \left[ (Q^r_{\phi^r}(s,a) - (r(s,a) + \gamma V^r_{\theta^r}(s')))^2 \right] \] where \( \phi^r \) and \( \theta^r \) denote the parameters of \( Q \) and \( V \) functions. 
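For concreteness, a minimal PyTorch-style sketch of this semi-gradient update is given below: $V^r_{\theta^r}$ is held fixed (detached) while $Q^r_{\phi^r}$ regresses onto the one-step target $r(s,a) + \gamma V^r_{\theta^r}(s')$. The network architectures, optimizer settings, and batch format are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn as nn

gamma = 0.99  # discount factor (placeholder)

# Placeholder critics; the paper does not specify architectures here.
q_r = nn.Sequential(nn.Linear(8 + 2, 64), nn.ReLU(), nn.Linear(64, 1))  # Q^r(s, a)
v_r = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))      # V^r(s)
q_opt = torch.optim.Adam(q_r.parameters(), lr=3e-4)

def update_q_r(batch):
    """Semi-gradient TD update: min_phi E[(Q^r(s,a) - (r + gamma * V^r(s')))^2],
    with V^r treated as a fixed target (no gradient flows into theta^r)."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    with torch.no_grad():                                  # fix V^r while updating Q^r
        target = r + gamma * v_r(s_next).squeeze(-1)
    q_pred = q_r(torch.cat([s, a], dim=-1)).squeeze(-1)
    loss = ((q_pred - target) ** 2).mean()
    q_opt.zero_grad()
    loss.backward()
    q_opt.step()
    return loss.item()

# Example call with a random offline mini-batch (shapes are illustrative).
batch = {"s": torch.randn(32, 8), "a": torch.randn(32, 2),
         "r": torch.randn(32), "s_next": torch.randn(32, 8)}
update_q_r(batch)
```

Detaching the target keeps the update semi-gradient, so approximation errors in $V^r$ are not amplified through the bootstrapping term.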
For cost value learning, we update \( Q^c \) using the equation in (12) and approximate \( V^c(s') \) with the maximum possible learned cost \( Q \) function to propagate the high learned cost of violating actions to the corresponding unsafe states in the offline dataset, i.e., \( V^c(s') = \max_{(s,a,s') \in D^O} [\gamma Q^c_{\phi^c}(s,a)] \). This update rule ensures the consistency of \( c(s,a) \) across different \((s,a)\) pairs that lead to the same destination \( s' \).

Algorithm 1: Inverse Dual Values Estimation (IDVE)
Require: Offline dataset $D^O = \{D^E, D^{-E}\}$, Running iterations $I$;
1: Initialize $Q_{\phi^r}^c = 0$, $V_{\theta^c}^c = 0$;
2: Run offline RL to warm up $Q_{\phi^r}^r$, $V_{\theta^r}^r$, $\pi_\psi$, and the violating action replay buffer $B_v = \{\emptyset\}$;
3: for $i = 1 \ldots I$ iterations do
4: Sample violating action $a_v$ from $a_v \sim \mathbb{E}_{s \sim D}[\pi_\psi(s)]$, add $(s, a_v)$ to buffer $B_v$;
5: Update $Q_{\phi^r}^r$ by minimizing the TD error [16] with dataset $D^O$;
6: Update $V_{\theta^r}^r$ with the dual value objective [13] and dataset $D^O$;
7: Update $Q_{\phi^c}^c$ by minimizing the behavioral gap to experts (objective [14]) with dataset $D^O$;
8: Update $V_{\theta^c}^c$ with the objective $\min_{\phi^c} \mathbb{E}_{(s,a,s') \sim D^O} \left[ \min(\gamma V_{\theta^c}^c(s') - Q_{\phi^c}^c(s, a), 0) \right]$;
9: Update $\pi_\psi$ with $\max_{\psi} \mathbb{E}_{s,a \sim d^O} \left[ \mathbb{1}_{S_V^*(s,a) \leq 0} \omega_f^*(S_V^*(s,a)) \log \pi_\psi(s,a) \right]$;
10: end for

6 Empirical Results

Running Settings. By following Malkin et al. (2022), we adopt the following evaluation metrics: 1) Constraint Violation Rate, which assesses the likelihood of a policy violating a constraint in a given trajectory, 2) Feasible Cumulative Rewards, which calculates the total rewards accumulated by the agent before violating any constraints, and 3) the success rate of reaching the destination for the grid-world environment. We run experiments with 5 different seeds and present the mean ± std results for each algorithm. Appendix A.3 reports the detailed settings and random seeds.

Comparison Methods. Due to the lack of offline ICRL baselines, we mainly compare IDVE with its variants and other relevant offline control methods, including 1) IDVE w/o S, which removes the control of sparsity by removing the clipping term in function (12); 2) IDVE w/o A, which excludes the violating action buffer $B_v$ from objective (9) by following only the expert density for learning cost values; 3) Offline IL, which follows the Inverse Soft Q-Learning [Garg et al., 2021] method that infers reward value functions from offline data to imitate the expert policy; and 4) Offline RL, which refers to the recently proposed f-DVL [Sikchi et al., 2023] algorithm that leverages the dual and offline reward function for control.

6.1 Discrete Environment

We utilize grid-world environments for evaluating our algorithm. These environments consist of 7x7 discrete maps, each designed with four unique constraint map settings (Figure 2). Within these environments, each agent is permitted to perform eight actions: moving up, down, right, left, or in one of four diagonal directions. The primary objective for every agent is to navigate from the start to the end point, taking the shortest possible route while avoiding specific constrained states. The agent receives a -1 reward for each step taken until it reaches its destination.
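A minimal sketch of such a grid-world is given below, mainly to fix the conventions used in the experiments (a 7x7 grid, eight moves, a -1 reward per step, and a set of constrained cells). The specific start, goal, and constraint coordinates are placeholders rather than the exact maps of Figure 2.

```python
import numpy as np

class GridWorld:
    """7x7 grid-world with 8 moves, -1 reward per step, and constrained cells."""
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),        # up, down, left, right
               (-1, -1), (-1, 1), (1, -1), (1, 1)]      # four diagonal moves

    def __init__(self, start=(0, 0), goal=(6, 6),
                 constrained=frozenset({(3, 3), (3, 4)})):
        self.start, self.goal, self.constrained = start, goal, constrained
        self.state = start

    def reset(self):
        self.state = self.start
        return self.state

    def step(self, action_idx):
        dr, dc = self.ACTIONS[action_idx]
        r = min(max(self.state[0] + dr, 0), 6)          # clamp to the 7x7 map
        c = min(max(self.state[1] + dc, 0), 6)
        self.state = (r, c)
        reward = -1.0                                    # -1 per step until the goal
        cost = 1.0 if self.state in self.constrained else 0.0  # ground-truth constraint signal
        done = self.state == self.goal
        return self.state, reward, cost, done

env = GridWorld()
s = env.reset()
s, reward, cost, done = env.step(3)   # move right
print(s, reward, cost, done)
```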
To enable offline ICRL, we provide an expert dataset $D^E$ and a sub-optimal dataset $D^{-E}$ (collected random walks) for each environment. The size of the offline dataset $D^O = \{D^E, D^{-E}\}$ is 50 (trajectories) (Check details in Appendix A.1). Figure 2: Four settings in grid-world. Blue, red, and black mark the starting, target, and absorbing states. The algorithms should infer the constrained region (gray) from demonstrations. Figure 3: The visualization for the recovered constraints in four settings. Constraint Visualization. Figure 3 visualizes the normalized value functions learned by different methods. By comparing with the ground-truth constraints in Figure 2, we find our IDVE successfully identifies the most sparse constraint, which plays a critical role in enabling the agent to navigate safely to its intended destination. An important phenomenon is that the learned cost values of IDVE \( w/o S \) significantly become denser, thereby constraining lots of safe regions. It reflects the necessity of applying a sparsity regularizer. The value function learned by the imitation model fails to capture the accurate constraint. **Sensitivity to hyper-parameters.** We conduct an in-depth study to investigate how the key parameters (\( \lambda \) and \( \alpha \)) in IDVE influence the performance of constraint recovery. Figure 4 visualizes the results. By scaling the regularizing \( \alpha \) from 0 to \(-\infty\) (\(-\infty\) indicates no regularization), we observe an increase in constraint density, which demonstrates the efficacy of IDVE in controlling sparsity. When we elevate \( \lambda \) from 0.4 to 0.55, IDVE shows a bias towards behaviors that maximize cumulative rewards (refer to our objective [13]). Consequently, the constraints that prevent agents from reaching the target states in the fewest steps diminish in significance. We find this phenomenon is most apparent when \( \alpha = 0 \), and it becomes less apparent as the density of the constraint increases (\( \alpha \) becomes larger). ### 6.2 Generalizability to New Environments We study how well the inferred constraints can be generalized to new environments with different starting and target states. In the experiment, the constraints are learned with data collected for training environments (Figure 2), and these constraints are evaluated under the new environment in Figure 5. Table 1 shows the results. The results show IDVE can outperform other baselines as it uniquely achieves a high success rate while maintaining a low cost. In contrast, IDVE \( w/o S \) does not achieve comparable performance, as over-dense constraints prohibit actions that have not been explored by the expert, yet may not necessarily be infeasible. The policy learned by offline RL also does not scale well, and we omit the offline IL as the distribution of rewards changes when the environment is modified, rendering the rewards learned by IL invalid. 
| Expert demonstration | Env | Offline RL | IDVE \( w/o S \) | IDVE | |----------------------|-----|------------|------------------|------| | | | Reward | Success Rate | Cost | Reward | Success Rate | Cost | Reward | Success Rate | Cost | | 10% expert | setting1 | -5.5 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-\infty\) | 80% | 40% | | | setting2 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-6.0\) | 100% | 0% | | | setting3 | -6.0 | 100% | 100% | \(-\infty\) | 60% | 0% | \(-6.6\) | 100% | 100% | | | setting4 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-7.5\) | 100% | 60% | | 50% expert | setting1 | -5.5 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-7.1\) | 100% | 0% | | | setting2 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-6.0\) | 100% | 0% | | | setting3 | -6.0 | 100% | 100% | \(-\infty\) | 80% | 0% | \(-\infty\) | 80% | 100% | | | setting4 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-7.1\) | 100% | 0% | | 100% expert | setting1 | -5.5 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-7.5\) | 100% | 0% | | | setting2 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-6.0\) | 100% | 0% | | | setting3 | -6.0 | 100% | 100% | \(-6.0\) | 100% | 0% | \(-6.0\) | 100% | 0% | | | setting4 | -5.0 | 100% | 100% | \(-\infty\) | 0% | 0% | \(-7.0\) | 100% | 0% | ### 6.3 Continuous Environment Our Continuous environments utilize MuJoCo [Todorov et al., 2012], a virtual simulator suited for robotic control tasks. To extend MuJoCo for constraint inference, we modify the MuJoCo environments by incorporating predefined constraints into each environment. We design constraints from... different perspectives for these agents: 1) We draw inspiration from two experiments conducted in Liu et al. (2023), where robots are forbidden from moving backward when it’s easier to move backward than to move forward (e.g., Half-Cheetah and Walker). We also create two other constraints inspired by real-world experiments. First, we impose a constraint on the agent’s maximum forward speed to simulate real-world speed limits. In the second environment, we enforce a constraint on the agent’s leg angles to prevent movement of its first leg. Table 3 summarizes the environment settings. The results and corresponding learning curve can be found in Table 2 and Figure 6. Across all environments, both IDVE and IDVE w/oS exhibit robust performance in imitating high-performing policies offline while maintaining the safety of the policy. This finding aligns with our expectations, as the sparsity regularizer is primarily tailored for enhancing generalizability (Section 4.1). Since MuJoCo uses identical training and testing environments, the benefits of encouraging sparsity are not readily apparent. When it comes to offline IL, it achieves a low cost when operated in offline mode, but it demonstrates relatively inferior reward-maximizing performance. Meanwhile, for offline RL, the associated costs rise substantially since it is not sensitive to the underlying constraint. Figure 7 in the appendix also visualizes the constraints of the Blocked Half-cheetah. The recovered rewards of offline IL show that its reward function assigns negative rewards to the unsafe states. Conversely, our method yields a sparse cost function centered around point 0, which effectively facilitates safe movements by solely discouraging backward movement from the start point. ![Figure 6: The cumulative rewards and the costs from the evaluation during training.](image) Table 2: MuJoCo testing performance. 
We report the average cumulative rewards and cumulative costs in 10 runs. The best average performance is highlighted in bold. | Method | Limited Speed Ant | Limit Arm HalfCheetah | Blocked Walker | Blocked Half-Cheetah | |--------------|-------------------|-----------------------|---------------|----------------------| | **Cumulative Rewards** | | | | | | Offline RL | 461.10±182.45 | 2,269.86±198.45 | 495.62±60.49 | 3,565.56±234.58 | | Offline IL | **1,284.85±77.07**| 850.74±374.66 | 440.13±90.62 | 726.65±61.68 | | IDVE w/oA | -108.67±53.34 | **3,099.46±670.42** | 458.47±79.86 | **4,297.64±571.42** | | IDVE w/oS | 1,043.50±12.07 | 1,405.13±133.72 | 419.28±94.90 | 828.01±78.04 | | IDVE | 1,061.11±21.90 | 2,433.99±445.43 | **483.97±38.21**| 901.62±74.09 | | **Cumulative Costs** | | | | | | Offline RL | 330.24±79.84 | 330.24±79.84 | 115.32±42.67 | 902.10±14.19 | | Offline IL | 6.06±2.97 | **133.56±48.47** | 70.04±39.16 | 10.30±20.60 | | IDVE w/oA | 52.36±24.02 | 905.60±38.22 | 60.76±24.84 | 917.04±28.89 | | IDVE w/oS | **4.40±0.41** | 228.97±69.93 | **51.38±11.31**| **0.00±0.00** | | IDVE | 9.48±3.75 | 421.50±157.57 | 85.26±36.10 | 5.04±10.08 | 7 Conclusion In this paper, we present an IDVE framework as the first attempt to facilitate offline ICRL. This is achieved by deriving dual value functions from regularized policy learning and formulating a bi-level optimization problem to update these value functions. To enhance practical applicability, we introduce a IDVE algorithm that effectively addresses unknown transitions, continuous environments, and insufficient sparsity. Empirical results demonstrate the performance of IDVE in various settings. A promising avenue for future research involves extending our method to accommodate diverse ICRL configurations, such as soft constraints in stochastic environments. REFERENCES Mattijs Baert, Pietro Mazzaglia, Sam Leroux, and Pieter Simoens. Maximum causal entropy inverse constrained reinforcement learning. *arXiv preprint arXiv:2305.02857*, 2023. Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020. Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International Conference on Machine Learning, ICML*, volume 97, pp. 2052–2062, 2019. Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. Iq-learn: Inverse soft-q learning for imitation. In *Neural Information Processing Systems (Neurips)*, pp. 4028–4039, 2021. Ashish Gaurav, Kasra Rezaee, Guiliang Liu, and Pascal Poupart. Learning soft constraints from constrained expert demonstrations. In *International Conference on Learning Representations (ICLR)*, 2023. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In *Advances in Neural Information Processing Systems*, pp. 12498–12509, 2019. Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based offline reinforcement learning. In *Advances in Neural Information Processing Systems*, 2020. Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In *Advances in Neural Information Processing Systems*, pp. 11761–11771, 2019. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. 
In *Advances in Neural Information Processing Systems*, 2020. Jongmin Lee, Wonseok Jeon, Byungjun Lee, Joelle Pineau, and Kee-Eung Kim. Optidice: Offline policy optimization via stationary distribution correction estimation. In *International Conference on Machine Learning*, pp. 6120–6130. PMLR, 2021. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *CoRR*, abs/2005.01643, 2020. Guiliang Liu, Yudong Luo, Ashish Gaurav, Kasra Rezaee, and Pascal Poupart. Benchmarking constraint inference in inverse reinforcement learning. In *International Conference on Learning Representations (ICLR)*, 2023. Shicheng Liu and Minghui Zhu. Distributed inverse constrained reinforcement learning for multi-agent systems. In *Neural Information Processing Systems (Neurips)*, 2022. Yongshuai Liu, Avishai Halev, and Xin Liu. Policy learning with constraints in model-free reinforcement learning: A survey. In *International Joint Conference on Artificial Intelligence (IJCAI)*, pp. 4508–4515, 2021. Shehryar Malik, Usman Anwar, Alireza Aghasi, and Ali Ahmed. Inverse constrained reinforcement learning. In *International Conference on Machine Learning (ICML)*, pp. 7390–7399, 2021. Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in gflownets. In *Neural Information Processing Systems (Neurips)*, 2022. David Livingston McPherson, Kaylene C. Stocking, and S. Shankar Sastry. Maximum likelihood constraint inference from stochastic demonstrations. In *IEEE Conference on Control Technology and Applications, (CCTA)*, pp. 1208–1213, 2021.
WEoyWdsI9f
Why does the accuracy on the private dataset express the success rate of the model stealing attack? What if the attacking model is very generalized and achieves high accuracy on the same data distribution?
Quantifying and Defending Against the Privacy Risk in Logit-based Federated Learning Anonymous authors Paper under double-blind review Abstract Federated learning (FL) aims to protect data privacy by collaboratively learning a model without sharing private data among clients. Novel logit-based FL methods share model outputs (i.e., logits) on public data instead of model weights or gradients during training to enable model heterogeneity, reduce communication overhead and preserve clients’ privacy. However, the privacy risk of these logit-based methods is largely overlooked. To the best of our knowledge, this research is the first theoretical and empirical analysis of a hidden privacy risk in logit-based FL methods – the risk that the semi-honest server (adversary) may learn clients’ private models from logits. To quantify the impacts of the privacy risk, we develop an effective attack named Adaptive Model Stealing Attack (AdaMSA) by leveraging historical logits during training. Additionally, we provide a theoretical analysis on the bound of this privacy risk. We then propose a simple but effective defense strategy that perturbs the transmitted logits in the direction that minimizes the privacy risk while maximally preserving the training performance. The experimental results validate our analysis and demonstrate the effectiveness of the proposed attack and defense strategy. 1 Introduction In recent years data privacy regulations such as General Data Protection Regulation (GDPR) have largely restricted the collection of annotated data on individuals for centralized training. Federated Learning (FL) (McMahan et al., 2017) provides a promising approach that allows different clients to collaboratively train their models by sharing local model parameters or gradients without exchanging their respective raw data. However, recent studies (Zhu et al., 2019; Geiping et al., 2020) reveal that private training data can be derived from the shared gradients or parameters, which poses serious privacy leakage issue. Another line of FL studies (Chang et al., 2019; Gong et al., 2021, 2022; Jeong et al., 2018; Li & Wang, 2019) adopt knowledge distillation (Hinton et al., 2015) to exchange model outputs (i.e., logits) instead of model weights or gradients during training to reduce communication overhead and enable model heterogeneity. To preserve clients’ privacy, these logit-based FL methods distill on public data to transfer knowledge and model parameters are stored locally during training, as depicted in Figure 1. Moreover, such public data can be unlabeled and insensitive that is sampled from other domains (Gong et al., 2022). A natural question arises: Is the logit-sharing scheme safe to protect the privacy of each participant? Unfortunately, we find that the transmitted logits can still pose the privacy leakage risk, e.g., the adversary may learn clients’ private models by leveraging its predictions on public data among the training iterations. Such leakage poses intellectual property issues and can serve as a stepping stone for further attacks, such as membership inference attacks (Nasr et al., 2019) and data reconstruction attacks (Geiping et al., 2020; Zhu et al., 2019). To the best of our knowledge, we are the first to provide a theoretical and empirical analysis of a hidden privacy risk in logit-based FL that the semi-honest server intends to infer clients’ private models without knowing local model architecture or data distribution. 
To quantify the impacts of the privacy risk, we develop an effective attack named Adaptive Model Stealing Attack (AdaMSA), which adaptively steals the private model by approximating its intermediate training states in previous iterations. Specifically, in each iteration, the semi-honest server forces the attacking model to approximate the current state of the victim model by minimizing the distance between the output of the attacking model and a target logit (i.e., attacking logit). Inspired by ensemble learning (Mienye & Sun [2022]), we propose to combine the observed historical logits of the victim model via an importance weight to obtain a more informative attacking logit. Additionally, we provide a theoretical analysis on the bound of this privacy risk in logit-based FL. Moreover, we propose a simple but effective perturbation-based defense strategy to prevent this privacy leakage in logit-based FL. The key idea of our strategy is to perturb the logit in the direction that maximally thwarts the adversary while minimally reducing the model performance loss. As a result, our defense achieves a better trade-off compared to prior art. We empirically evaluate our proposed attack and defense in three experimental settings, Close-world, Open-world-CF and Open-world-TI (see Section 5.1 for details). The experimental results show that AdaMSA is effective and our defense can offer a better utility and privacy trade-off than the state-of-the-art baselines. We believe that our research can shed light on the hidden privacy risk of logit-based FL and pave the way toward privacy-preserving FL methods. Our key contributions are summarized as follows: - To the best of our knowledge, we provide the first theoretical and empirical analysis of a hidden privacy risk in logit-based FL that the semi-honest server can infer clients’ private models according to logits. - To quantify the privacy risk, we develop an effective model stealing attack named AdaMSA, which steals private models by leveraging historical logits during training. Moreover, we provide a theoretical bound for the privacy risk. - To prevent the privacy risk, we develop a simple but effective defense by perturbing the transmitted logits in the direction that minimizes the privacy risk while maximally preserving the training performance. - We empirically evaluate our designed attack and defense in three experimental settings, Close-world, Open-world-CF and Open-world-TI. The results validate our analysis, show that AdaMSA can achieve up to 3.69% improvement and our defense can achieve a better utility and privacy trade-off compared to the state-of-the-arts. 2 RELATED WORK 2.1 PRIVACY RISK IN FEDERATED LEARNING Federated learning (Kairouz et al. [2019]) allows multiple clients to collaboratively train a global model while keeping training data locally. Typical FL algorithms (McMahan et al. [2017], Karimireddy et al. [2020]) are parameter-based FL that shares local model parameters or gradients and... aggregate local models in the server. Logit-based FL (Chang et al., 2019; Gong et al., 2021; 2022; Jeong et al., 2018; Li & Wang, 2019) adopt knowledge distillation (Hinton et al., 2015) to transmit model outputs (i.e., logits) instead of model weights or gradients during training to reduce communication overhead, enable model to be heterogeneous and preserve clients’ privacy. 
Previous studies have thoroughly analyzed the privacy risks of sharing model parameters or gradients in FL, including class representative leakage (Wang et al., 2019), membership leakage (Nasr et al., 2019), property leakage (Melis et al., 2019) and training input leakage (Geiping et al., 2020; Zhu et al., 2019). However, these efforts are all white-box attacks (detailed comparisons are summarized in Appendix A). That is, they make the strong assumption that the adversary knows the local model architecture and detailed training information such as gradients. In this work, we focus on logit-based FL, in which model parameters and gradients are stored on clients' local machines. To our knowledge, this is the first study analyzing the privacy risk in logit-based FL.

### 2.2 Model Stealing Attack

Model stealing attacks (Orekondy et al., 2019; Papernot et al., 2017; Tramèr et al., 2016) have demonstrated the ability to steal a deployed machine learning model in a black-box manner through limited query access and a carefully calibrated proxy dataset. These attacks happen in the inference stage and aim to reduce the number of queries or eliminate the need for a proxy dataset. However, in logit-based FL, the attack happens in the training stage, where the adversary can neither arbitrarily select the query dataset nor access the private models or the private dataset distribution. Instead, the adversary only observes the intermediate information (i.e., transmitted logits) from the victim during training. Based on this observation, we propose AdaMSA, which leverages historical training information to obtain more informative attacking logits and thereby improves the attack performance.

### 2.3 Privacy Protection Strategy in Logit-based FL

Researchers have proposed several strategies (Li & Wang, 2019; Gong et al., 2022; Sattler et al., 2021) to prevent the potential privacy leakage in logit-based FL. Specifically, Li et al. (Li & Wang, 2019) proposed to distill on a public dataset instead of private data to transfer predicted vectors. Gong et al. (Gong et al., 2022) further relaxed the public data to be unlabeled and insensitive data sampled from other domains to preserve data privacy. Moreover, Sattler et al. (Sattler et al., 2021) and Gong et al. (Gong et al., 2022) adopted differential privacy (DP) to protect the transmitted logits. However, these papers fail to quantify the privacy risk inherent in logit-based FL, and their defense strategies incur a significant loss in accuracy. In contrast, we first identify and quantify the privacy risk. Then we design a simple but effective perturbation strategy against the revealed privacy risk, which perturbs the logit in the direction that maximally misleads the adversary while minimally degrading training performance. Therefore, it achieves a better utility and privacy trade-off.

### 3 Quantifying the Privacy Risk

In this section, we first give the problem setup and threat model. Then we propose an attack to quantify the privacy risk in logit-based FL and elaborate on the proposed attack in detail. Lastly, we give a theoretical analysis of the bound of this privacy risk.

#### 3.1 Problem Setup and Threat Model

**Problem Setup:** As shown in Figure 1, there are \( n \) clients and a central server. Each client has a private labeled dataset \( \{D_i\}_{i=1}^n \) and some unlabeled public data \( D_{pub} \).
The server coordinates the training process, aggregating the client’s submitted locally predicted logits \( \{p_i\}_{i=1}^n \) on public data to obtain an ensemble logit \( p_e \) and distribute it back to clients. Then clients train their local models \( \{\theta_i\}_{i=1}^n \) under supervision of labels on \( \{D_i\}_{i=1}^n \) and the ensemble logit on \( D_{pub} \). **Threat Model:** We assume that the server (i.e., adversary) is semi-honest, i.e. it completes the learning task as required but is curious about the clients’ local model, and all clients are honest. The adversary does not know the private data distribution and the victim model, including its parameters, hyperparameters or architecture. Moreover, the adversary does not have the right to select public data. The adversary only knows that 1) the unlabeled public dataset \( D_{pub} \); 2) the transmitted victim’s logits \( \{p_t\}_{t=1}^T \) on the public data during training. The adversary’s goal is to steal the functionality of the victim’s model \( f(x, \theta) \) by training an attacking model \( \tilde{f}(x, \tilde{\theta}) \) that achieves high classification accuracy on the victim \( k \)'s private dataset \( D_k \): \[ \max_{\tilde{\theta}} \mathbb{E}_{x \sim D_k} \text{Acc}(\tilde{f}(x, \tilde{\theta})). \] In the following discussion, we assume that the adversary is interested in client \( k \)'s model and denote it as \( \theta \) for simplicity. | Threat Model | Adversary | Attack Target | Adversary’s Knowledge | |--------------|-----------|---------------|-----------------------| | Semi-honest | Server | Private model \( \theta \) | Logits of \( \theta \) on \( D_{pub} \) during training and \( D_{pub} \) | Table 1: Threat model. ### 3.2 Adaptive Model Stealing Attack To quantify the privacy risk in logit-based FL, we develop an Adaptive Model Stealing Attack named AdaMSA, which adaptively steals the private model by approximating its intermediate training states in previous iterations. Specifically, in each iteration, the server forces the attacking model to approximate the victim model by imitating a target logit (i.e., attacking logit) on public data. Since a more informative attacking logit will provide better supervision for the attacking model, the key issue here is how to design the attacking logit. Given the historical logits during training, we design the attacking logit based on two considerations: 1) historical predictions may contain valuable information and can provide different views of data to improve generalization ability of student model (Allen-Zhu & Li, 2020), 2) predictions in the early rounds may not be well-trained to provide informative supervision. Accordingly, we define the attacking logit \( \hat{p}_T \) as \[ \hat{p}_T = \sum_{t=T-T_0}^{T} w_t \cdot p_t, \] where \( w_t \) is an importance weight that represents how much attention we give to the past predictions and \( T_0 \) is a threshold that controls how far the past predictions we need to consider. Considering that the victim model continuously evolves during the training process, we should give more attention to the prediction closer to the current iteration. Consequently, we set \( w_t \) increase with iteration number \( t \): \[ w_t = w_0 \cdot \frac{t}{T}, \] where \( w_0 \) is a normalization parameter. 
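To make the construction of the attacking logit concrete, the following is a minimal sketch of the weighting scheme described above. The function and variable names (`historical_logits`, `T0`, `w0`) are illustrative, and normalizing the weights to sum to one is one natural reading of \( w_0 \) being a "normalization parameter"; the authors' actual implementation may differ.

```python
import numpy as np

def attacking_logit(historical_logits, T, T0, w0=1.0):
    """Combine the victim's transmitted logits from iterations T-T0..T into the
    attacking logit p_hat_T = sum_t w_t * p_t, with w_t = w0 * t / T, so that
    predictions closer to the current iteration receive larger weight."""
    ts = [t for t in sorted(historical_logits) if T - T0 <= t <= T]
    weights = np.array([w0 * t / T for t in ts], dtype=float)
    weights = weights / weights.sum()  # assumed normalization so the weights sum to one
    # historical_logits[t]: array of shape (num_public_samples, num_classes)
    return sum(w * historical_logits[t] for w, t in zip(weights, ts))
```

The resulting attacking logit then serves as the supervision target for the attacking model in the next step.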
To train the attacking model \( \tilde{\theta} \), we minimize an empirical loss \( L_{emp} \) computed between the attacking logit \( \hat{p}_T \) and its prediction \( \tilde{p}_T \) on \( D_{pub} \), which can be formulated as:

\[ \min_{\tilde{\theta}} \mathbb{E}_{x \sim D_{pub}} [L_{emp}(\hat{p}_T, \tilde{p}_T)]. \]

We provide the algorithm of our proposed attack in one training iteration in Appendix B.1. By repeating this process, the semi-honest server is able to steal any intermediate private model of interest during training as well as the final well-trained private model of the victim. A higher accuracy of the obtained attacking model indicates that more privacy of the victim has been leaked.

### 3.3 Analysis of the Privacy Risk

Logit-based FL methods transfer knowledge through the transmitted logits on public data during training. As shown in Figure 2(b-c), we identify that the correlation (i.e., distance) between the private dataset and the public dataset plays a crucial role in determining the performance of logit-based FL methods. To better understand the inherent cause of the privacy risk in this logit-sharing scheme, we start with quantifying the distance between the private dataset and the public dataset. Consider a simple case in which we have \( n \) private datasets sampled from the same distribution \( D_{\text{priv}} \) and an unlabeled public dataset from another domain sampled from an independent distribution \( D_{\text{pub}} \). We construct several mixed datasets \( D_{\text{mix}} \) to serve as the public dataset, with the distance between the private and mixed datasets controlled by a weighting parameter \( \alpha \). As shown in Figure 2(a), the mixed dataset is constructed as \( D_{\text{mix}} = (S_1, S_2) \), where \( S_1 \) consists of \( \alpha |D_{\text{mix}}| \) instances sampled independently from \( D_{\text{priv}} \) and \( S_2 \) consists of \( (1 - \alpha)|D_{\text{mix}}| \) instances sampled independently from \( D_{\text{pub}} \). By varying \( \alpha \), we can thereby control the distance between the private and mixed datasets; e.g., as \( \alpha \) tends to 1, the mixed dataset approaches the private dataset, and vice versa.

As illustrated previously in Section 3.2, the privacy risk is measured by the performance of AdaMSA on the victim's private test dataset. Consequently, we can derive the bound of the privacy risk via the performance bound of AdaMSA on the victim's private test dataset, based on prior art in domain adaptation (Blitzer et al., 2007). Denote the empirical risk of model \( \theta \) on the mixed dataset as

\[ \epsilon_{D_{\text{mix}}}(\theta, f_p) = \mathbb{E}_{x \sim D_{\text{mix}}} [\|\theta(x) - f_p(x)\|], \]

which measures the expected disagreement, under the distribution \( D_{\text{mix}} \), between \( \theta \) and the ground-truth labeling function \( f_p \). For simplicity, we abbreviate \( \epsilon_{D_{\text{mix}}}(\theta, f_p) \) as \( \epsilon_{D_{\text{mix}}}(\theta) \). Similarly, \( \epsilon_{D_{\text{priv}}}(\theta) \) and \( \epsilon_{D_{\text{pub}}}(\theta) \) denote the empirical risk of \( \theta \) with respect to \( D_{\text{priv}} \) and \( D_{\text{pub}} \).
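For the analysis above, a small sketch of how the mixed public dataset and the empirical risk could be instantiated may be helpful; the sampling interface and the names used here are assumptions for illustration, not the authors' code.

```python
import numpy as np

def build_mixed_dataset(D_priv, D_pub, size, alpha, seed=0):
    """D_mix = (S1, S2): an alpha fraction drawn i.i.d. from the private data and a
    (1 - alpha) fraction drawn i.i.d. from the public data (Section 3.3)."""
    rng = np.random.default_rng(seed)
    n_priv = int(round(alpha * size))
    s1 = [D_priv[i] for i in rng.choice(len(D_priv), n_priv, replace=False)]
    s2 = [D_pub[i] for i in rng.choice(len(D_pub), size - n_priv, replace=False)]
    return s1 + s2

def empirical_risk(theta, data, f_p):
    """eps_D(theta, f_p) = E_x || theta(x) - f_p(x) ||: average disagreement between
    the model and the ground-truth labeling function over `data`."""
    return float(np.mean([np.linalg.norm(theta(x) - f_p(x)) for x in data]))
```

Under this construction, Theorem 1 below states that the risk on the mixture decomposes as \( \epsilon_{D_{\text{mix}}}(\theta) = \alpha\,\epsilon_{D_{\text{priv}}}(\theta) + (1-\alpha)\,\epsilon_{D_{\text{pub}}}(\theta) \).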
**Theorem 1.** Given a mixed dataset \( D_{\text{mix}} = (S_1, S_2) \), where \( S_1 \) consists of \( \alpha |D_{\text{mix}}| \) instances sampled independently from \( D_{\text{priv}} \), \( S_2 \) consists of \( (1 - \alpha)|D_{\text{mix}}| \) instances sampled independently from \( D_{\text{pub}} \), its empirical risk can be written as \[ \epsilon_{D_{\text{mix}}}(\theta) = \alpha \epsilon_{D_{\text{priv}}}(\theta) + (1 - \alpha) \epsilon_{D_{\text{pub}}}(\theta). \] The proof of Theorem 1 is given in Appendix D.1. Then we give some definitions and derive the bound of the difference between the empirical risk of \( \theta \) on the mixed dataset \( D_{\text{mix}} \) and the private dataset \( D_{\text{priv}} \) for our analysis. **Definition 1.** Given a domain \( X \) with \( D \) and \( D' \) probability distributions over \( X \), let \( H \) be a hypothesis class on \( X \) and \( A_H \) be the set of subsets of \( X \) that supports the hypothesis in \( H \). The \( H \)-divergence between \( D \) and \( D' \) is defined as: \[ d_H(D, D') = \sup_{A \in A_H} |Pr_D(A) - Pr_{D'}(A)|. \] **Definition 2.** For a hypothesis space \( H \), the symmetric difference hypothesis space \( H \Delta H \) is defined as \( H \Delta H = \{ h(x) \oplus h'(x) | h, h' \in H \} \), where \( \oplus \) represents the XOR operation. **Theorem 2** (Blitzer et al., 2007). Let \( h \) be a hypothesis in class \( H \). Then we have \[ |\epsilon_{D_{\text{mix}}}(h) - \epsilon_{D_{\text{priv}}}(h)| \leq (1 - \alpha)(\frac{1}{2} d_{H \Delta H}(D_{\text{priv}}, D_{\text{pub}}) + \lambda), \] where \( \lambda = \epsilon_{D_{\text{priv}}}(h^*) + \epsilon_{D_{\text{pub}}}(h^*) \) and \( h^* \) is the ideal joint hypothesis minimizing the combined empirical risk: \( h^* = \arg\min_{h \in H} \epsilon_{D_{\text{priv}}}(h) + \epsilon_{D_{\text{pub}}}(h) \). The proof of Theorem 2 is given in Appendix D.2. In our case, the attacking model is trained on mixed dataset and test on victim’s private dataset. According to Theorem 2, we obtain that the bound of the privacy risk, measured by the performance of the attacking model $\tilde{\theta}$, is bounded by $(1 - \alpha)(\frac{1}{2}d_{H\Delta H}(D_{priv}, D_{pub}) + \lambda)$. When fixing $D_{priv}$ and $D_{pub}$, this bound is only related to weighting parameter $\alpha$. This bound drops to 0 as $\alpha$ increases to 1. This indicates that, when $\alpha$ increases, i.e. the mixed public data gets closer to the private data, the attacking model performs better on the private test dataset and more private information of the model, which is trained on its logits and the public dataset, is leaked. Here we briefly discuss the inherent cause of this privacy risk in logit-based FL. From Figure 2(b-c), it is observed that local model training is benefited from the knowledge contained in the ensemble logit which is obtained through the ensemble of local predicted logits on the public data. Although a more informative local logit results in a more informative ensemble logit, it also exposes more privacy to the adversary as illustrated in Theorem 2. Our observation is also consistent with our empirical result as shown in Figure 3 in Section 5.2. 4 DEFENSE DESIGN Our observation in Section 3 shows that the privacy risk in logit-based FL mainly comes from the logit. In this section, we propose a defense strategy that perturbs the transmitted logits of local models to defend against this privacy risk. **Defense Objective** The defender (i.e. the clients) has two objectives. 
First, the defender aims to prevent an adversary from being able to replicate the functionality of its private model: $$\min_{\tilde{\theta}} \mathbb{E}_{x \sim D_{priv}} \text{Acc}(\tilde{f}(x, \tilde{\theta})), \quad (2)$$ where $\tilde{f}(x, \tilde{\theta})$ denotes the functionality of the attacking model. Second, the defender aims to preserve the training performance of the logit-based FL protocol so that the perturbation scale should be bounded by a non-negative constant $\gamma$: $$||p - p'||_b \leq \gamma, \quad (3)$$ where $p$ is the original logit on the public data, $p'$ is the corresponding perturbed logit, $\gamma$ is a predetermined constant parameter and $||.||_b$ denotes the $L_b$ norm. We note that the defender has no access to the adversary’s model and may be even unaware that it is under attack since the attack happens at the server side. Therefore, the defender has to prevent the privacy risk from the semi-honest server during the whole training process. **Defense Problem** Combining Equation (2) and (3), we can formulate a defense problem for the defender. However, this problem cannot be directly solved since the attacking model parameters and its training details are unknown to the defender. Therefore, we need to approximate the first objective from the perspective of the defender. The first step is to estimate the attacking model $\tilde{\theta}$. As the goal of $\tilde{\theta}$ is to approximate the defender’s model $\theta$ in each training iteration, we estimate the attacking model obtained from the last iteration to be the same as the defender’s model $\theta$ in the current iteration $T$: $$\tilde{\theta}'_{T-1} = \theta_{T-1},$$ where $\tilde{\theta}'_{T-1}$ and $\theta_{T-1}$ are the estimated attacking model and the defender’s model in the last training iteration respectively. Without loss of generality, we assume that the adversary optimizes its attacking model through the gradient of an empirical loss on the public data, which is the most widely used optimization method in deep learning (Oliynyk, 2020). The gradient of an empirical loss with respect to parameter $\theta$ can be expressed as $$G(\theta, p) = \nabla_\theta L_{emp}(f(x, \theta), p). \quad (4)$$ Based on the above assumptions, we restate the objectives of the defender as maximally changing the updated gradients of the estimated attacking model with minimum perturbation on the logit. That is, we can rewrite the first objective of the defender in iteration $T$ as maximizing the distance between the gradients of the estimated attacking model updated through the original logit and the perturbed logit, in terms of $L_a$ norm: $$\max_{p'} ||G(\hat{\theta}_{T-1}, p') - G(\hat{\theta}_{T-1}, p)||_a = \max_{p'} ||G(\theta_{T-1}, p') - G(\theta_{T-1}, p)||_a$$ where $G(\theta, p)$ is the gradient of the empirical loss with respect to the parameters $\theta$. Combining Equation (5) and (3), we therefore reformulate the defense problem as a constrained optimization problem: $$\max_{p'} ||G(\theta_{T-1}, p') - G(\theta_{T-1}, p)||_a$$ subject to $$||p - p'||_b \leq \gamma,$$ which allows the defender to trade off the utility and privacy in logit-based FL training. Worth mentioning that we can form multiple defense problems and corresponding defense strategies with different selections of $(a, b)$. In this paper, we set $(a, b) = (2, 1)$ and leave the other options as future work. **Defense solution** Deep learning models usually involve millions of parameters and thus solving Equation (6) s.t. 
Equation (7) with respect to each sample in the public dataset requires a large computational cost for clients, which is unaffordable for local devices in practice. Here, we give a simple heuristic solver to circumvent this computational issue. We perturb the logit $p$ on the public dataset in each dimension of itself by $Z$: $$p' = p + Z \cdot e_j,$$ where $e_j$ denotes a one-hot vector with 1 in the $j$-th dimension of $p$ and 0’s elsewhere. Then we select the one giving the largest perturbation in Equation (6). The local training with our defense in one iteration is given in Appendix B.2. We empirically show that this simple solution is effective in Section 5.3. **Noise Selection and Privacy Guarantee** The added noise $Z$ has multiple choices, such as Laplace or Gaussian noise (Abadi et al., 2016). Following the prior works (Dwork et al., 2014), we adopt $Z$ as a Gaussian noise $Z \sim N(0, \sigma^2)$, where $\sigma = \sqrt{\gamma}$ is a variance parameter. We show that our proposed perturbation based defense strategy in Algorithm 2 preserves $(\epsilon, \delta) - DP$ in Appendix C. ## 5 EXPERIMENT In this section, we conduct experiments to answer two research questions: **RQ1**: Is the proposed AdaMSA effective against logit-based FL? **RQ2**: Is the proposed defense effective against AdaMSA, i.e. can it achieve a better utility and privacy trade-off? ### 5.1 EXPERIMENTAL SETUP **Experimental Settings** To evaluate our proposed attack and defense strategy, we construct three experimental settings: Close-world, Open-world-CF and Open-world-TI for the image classification task. For Close-world, we utilize MNIST (LeCun et al., 1998) without labels as the unlabeled public dataset and EMNIST (Cohen et al., 2017) as the private dataset. For Open-world-CF, we utilize CIFAR10 (Krizhevsky et al., 2009) as the unlabeled public dataset and SVHN (Netzer et al., 2011) as the private dataset. For Open-world-TI, we adopt TinyImagenet (Le & Yang, 2015) as the unlabeled public dataset and SVHN (Netzer et al., 2011) as the private dataset. The details of experimental settings are given in Appendix E.1. **Implementation** We adopt CNN as backbones for all clients’ models. For the image classification problem, we choose the most commonly used cross-entropy loss as the empirical loss for both local and attacking models. We use a 2-layer CNN with (128,256) parameters as the attacking model and a CNN with same structure as the victim model to report the main result in Table 2. We provide details of hyperparameter choices in Appendix E.3. We repeat each experiment three times with random initialization and report the average accuracy. **Baselines** We modify the conventional MSA method (Tramèr et al., 2016) in our setting as the attack baseline. For MSA baseline, we train an attacking model by approximating the victim’s current logit in each round and report the highest accuracy of the attacking model during training. 
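The heuristic solver can be sketched as follows. This is a minimal illustration assuming a cross-entropy empirical loss, the defender's own last-iteration model as the stand-in for the estimated attacking model, and \( (a, b) = (2, 1) \); the function name `defend_logit` and the per-example interface are ours, and the authors' implementation (Algorithm 2) may differ in details.

```python
import torch
import torch.nn.functional as F

def defend_logit(model_prev, x, logit, gamma):
    """Heuristic solver for the defense problem (Eq. 6-7): add Gaussian noise
    Z ~ N(0, sigma^2), sigma = sqrt(gamma), to one dimension of the logit at a time,
    and keep the candidate that changes the gradient of the estimated attacking model
    the most (L2 norm), while the L1 perturbation stays within the budget gamma.

    x, logit: a single public example and its transmitted logit, shapes (1, ...) and (1, C).
    Requires a PyTorch version whose cross_entropy accepts probability targets."""
    def grad_vec(target):
        loss = F.cross_entropy(model_prev(x), target.softmax(dim=-1))
        grads = torch.autograd.grad(loss, list(model_prev.parameters()))
        return torch.cat([g.flatten() for g in grads])

    g_orig = grad_vec(logit)
    best, best_gap = logit, -1.0
    for j in range(logit.shape[-1]):
        noise = torch.randn(()) * gamma ** 0.5            # Z ~ N(0, gamma)
        cand = logit.clone()
        cand[..., j] = cand[..., j] + noise
        if (cand - logit).abs().sum() > gamma:            # constraint ||p - p'||_1 <= gamma
            continue
        gap = torch.linalg.vector_norm(grad_vec(cand) - g_orig).item()
        if gap > best_gap:
            best_gap, best = gap, cand
    return best
```

Scanning one logit dimension at a time keeps the cost linear in the number of classes, which is what makes the defense affordable on local devices.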
| Setting | Defense | Victim Acc(%) | MSA (Tramèr et al., 2016) Acc(%) | AdaMSA Acc(%) | |------------------|--------------------------|---------------|---------------------------------|---------------| | Close-world | Unprotected (Li & Wang, 2019) | 83.43 ± 1.07 | 80.54 ± 1.31 | 83.79 ± 1.20 | | | Cross-domain (Lin et al., 2020) | 82.97 ± 0.89 | 79.43 ± 1.26 | 82.68 ± 1.01 | | | One-shot (Li et al., 2020) | 68.23 ± 1.21 | 65.03 ± 0.89 | 67.18 ± 1.17 | | | DP-G (Sattler et al., 2021) | 74.68 ± 2.15 | 71.69 ± 1.14 | 74.79 ± 1.20 | | | DP-L (Gong et al., 2022) | 75.65 ± 1.97 | 72.01 ± 1.21 | 75.53 ± 1.09 | | Open-world-CF | Unprotected (Li & Wang, 2019) | 71.11 ± 0.98 | 68.77 ± 1.13 | 71.25 ± 0.71 | | | Cross-domain (Lin et al., 2020) | 53.57 ± 1.13 | 52.21 ± 1.06 | 53.78 ± 0.79 | | | One-shot (Gong et al., 2021) | 63.41 ± 1.09 | 60.03 ± 0.98 | 62.49 ± 0.91 | | | DP-G (Sattler et al., 2021) | 65.89 ± 1.77 | 62.20 ± 1.39 | 65.61 ± 1.31 | | | DP-L (Gong et al., 2022) | 65.22 ± 1.96 | 65.97 ± 1.28 | 67.64 ± 1.22 | | Open-world-TI | Unprotected (Li & Wang, 2019) | 71.11 ± 0.98 | 69.34 ± 0.97 | 72.13 ± 1.02 | | | Cross-domain (Lin et al., 2020) | 51.14 ± 1.10 | 51.17 ± 1.05 | 53.54 ± 0.80 | | | One-shot (Gong et al., 2021) | 63.09 ± 1.14 | 62.36 ± 1.03 | 64.41 ± 0.99 | | | DP-G (Sattler et al., 2021) | 65.76 ± 2.01 | 62.77 ± 1.20 | 65.02 ± 1.18 | | | DP-L (Gong et al., 2022) | 65.09 ± 1.98 | 66.11 ± 1.13 | 68.41 ± 1.04 | Table 2: Attack performance of our proposed AdaMSA and MSA baseline (Tramèr et al., 2016) on the victim model with various defense baselines in Close-world, Open-world-CF and Open-world-TI settings. Victim Acc denotes the performance of the victim model. We compare our proposed defense strategy to several state-of-the-art baseline defenses: 1) Unprotected (Li & Wang, 2019); 2) Cross-domain (Lin et al., 2020); 3) One-shot (Li et al., 2020); 4) Differential Privacy with Gaussian (DP-G) (Sattler et al., 2021); 5) Differential Privacy with Laplacian (DP-L) (Gong et al., 2022). The detailed explanation of the baselines is in Appendix E.2. **Evaluation Metrics** For attack evaluation, we use the prediction accuracy of attacking model on the victim’s private test dataset to measure the attack performance. For defense evaluation, we evaluate all defenses on a utility loss vs. privacy loss curve at various points of the defenses. The utility loss $\Delta U$ of the defense is defined as $\Delta U = U_d - U_0$, where $U_0$ is the accuracy of local model in unprotected baseline and $U_d$ is the accuracy of local model under different defense. The privacy loss is defined as the prediction accuracy of the attacking model on the defender’s private test dataset. ### 5.2 Attack Performance Evaluation **Main Results:** Table 2 shows the performance of our proposed AdaMSA and MSA baseline in three settings. We observe that AdaMSA achieves similar performance compared to the victim models in most of the defense baselines, indicating that AdaMSA can successfully steal the functionality of the victim’s model. We also observe that attacking models of AdaMSA achieves up to 3.69% improvement in Close-world setting, 3.41% improvement in Open-world-CF setting and 2.79% in Open-world-TI setting compared to the baseline, indicating our attack design that combines historical logits generates a more informative attacking logit and thereby improve the attack performance. 
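The utility and privacy quantities used throughout Section 5 can be made explicit as follows (a trivial sketch; variable names are ours).

```python
def tradeoff_point(defended_local_acc, unprotected_local_acc, attack_acc_on_private_test):
    """One point on the utility-privacy curves of Figure 4.
    Utility loss: delta_U = U_d - U_0, the local-model accuracy under the defense minus
    its accuracy in the unprotected baseline; privacy loss: accuracy of the attacking
    model on the defender's private test set."""
    delta_u = defended_local_acc - unprotected_local_acc
    privacy_loss = attack_acc_on_private_test
    return delta_u, privacy_loss
```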
Figure 3: The relation between the utility loss/privacy loss and $\alpha$ in Open-world-CF (left) and Open-world-TI (right) settings. The blue line denotes the utility loss-$\alpha$ curve and the red line denotes the privacy loss-$\alpha$ curve.

**Effect of Distance Between Private and Public Datasets** As mentioned previously in Section 3.3, the distance between the private and public datasets plays a crucial role in determining the utility and privacy risk in logit-based FL. To evaluate the effect of this factor, we construct several mixed datasets in the Open-world-CF and Open-world-TI settings as the public dataset by varying the weighting parameter $\alpha$. The results are reported in Figure 3. It can be observed that increasing the value of $\alpha$, i.e., decreasing the distance between the public and private datasets, increases the utility at the cost of increased privacy loss. This result indicates that utility and privacy in logit-based FL are indeed two sides of the same coin. That is, a more informative local logit results in a more informative ensemble logit to supervise the local model training, while it also exposes more privacy to the adversary. This result is consistent with our analysis in Section 3.3.

**Effect of Historical Logits** In order to evaluate the effect of our attack design, which combines historical logits to generate a more informative attacking logit, we vary $T_0$ and test on the same victim model in the two open-world settings. Note that when $T_0 = 0$, it is equivalent to performing the MSA (Tramèr et al., 2016) baseline. From the results in Table 3, we find that, when $T_0$ increases from 0 to 4, the attack performance gradually increases by 2.59% in the Open-world-CF setting and 3.10% in the Open-world-TI setting, demonstrating that combining more historical logits indeed improves the attack performance. This is because the historical predictions close to the current round can be viewed as different views of the victim model on the public data. Therefore, our designed attacking logit benefits from the ensemble of these multi-view predictions, which improves the model's generalization ability on the test data (Mienye & Sun, 2022; Allen-Zhu & Li, 2020).

| $T_0$ | Open-world-CF Attack Acc (%) | Open-world-TI Attack Acc (%) |
|-------|------------------------------|------------------------------|
| 0 | 68.77 ± 1.13 | 69.34 ± 0.97 |
| 1 | 70.01 ± 0.73 | 70.99 ± 1.17 |
| 2 | 70.47 ± 1.03 | 71.87 ± 1.30 |
| 3 | 71.25 ± 0.71 | 72.13 ± 1.02 |
| 4 | 71.36 ± 0.88 | 72.44 ± 0.89 |

Table 3: The effect of combining historical logits in the attacking logit on the attack performance of AdaMSA.

### 5.3 Defense Performance Evaluation

Figure 4: Defense performance evaluation in (a) Close-world, (b) Open-world-CF and (c) Open-world-TI settings. The ideal trade-off curve resides in the bottom left corner of the figure.

The results of the state-of-the-art defense strategies in logit-based FL and our defense in the three settings are reported in Figure 4. The X-axis represents the privacy loss, i.e., the capability of the adversary to infer a client's private model. The Y-axis represents the utility loss brought by the defense methods. Compared to the state-of-the-art baselines, our proposed defense is closest to the ideal trade-off, which resides in the bottom left corner of Figure 4. For example, when the privacy loss is 0.7, the utility loss of our defense is around 8% less than DP-G and DP-L in the Close-world setting.
This result indicates that our defense can provide a better utility and privacy trade-off compared to the state-of-the-art defense baselines. ### 6 Conclusion In this paper, we provide the first theoretical and empirical analysis of a hidden privacy risk in logit-based FL that the semi-honest server can infer clients’ private models according to logits. To quantify the impacts of the privacy risk, we develop an effective attack named Adaptive Model Stealing Attack (AdaMSA) by leveraging historical logits during training and provide a theoretical analysis on the bound of the privacy risk. Moreover, we propose a perturbation-based defense that perturbs the transmitted logit in the direction that minimizes the privacy risk while maximally preserving the training performance. The empirical results on three experimental settings demonstrate the effectiveness of our proposed attack and defense. REFERENCES Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016. Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816, 2020. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. Advances in neural information processing systems, 20, 2007. Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr. Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer. arXiv preprint arXiv:1912.11279, 2019. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In 2017 international joint conference on neural networks (IJCNN), pp. 2921–2926. IEEE, 2017. Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9(8), 2008. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography Conference, pp. 265–284. Springer, 2006. Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014. Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients—how easy is it to break privacy in federated learning? Advances in neural information processing systems, 33:16937–16947, 2020. Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyi Wu, Terrence Chen, David Doermann, and Arun Innanje. Ensemble attention distillation for privacy-preserving federated learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15076–15086, 2021. Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyi Wu, Terrence Chen, David Doermann, and Arun Innanje. Preserving privacy in federated learning with ensemble cross-domain knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 11891–11899, 2022. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 
Eunjeong Jeong, Seungeun Oh, Hyesung Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data. arXiv preprint arXiv:1811.11479, 2018. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977, 2019. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 5132–5143. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/karimireddy20a.html.
6xfe4IVcOu
Do you have insights on how this method would scale from pairwise comparisons to N-way rankings? I thought that WebGPT had >2 responses per prompt (but might be mistaken) -- if so, it would be great to understand how this was handled.
Chain of Hindsight aligns Language Models with Feedback Hao Liu UC Berkeley hao.liu@berkeley.edu Carmelo Sferrazza UC Berkeley csferrazza@berkeley.edu Pieter Abbeel UC Berkeley pabbeel@cs.berkeley.edu Abstract Learning from human preferences is important for language models to match human needs and to align with human and social values. Prior works have achieved remarkable successes by learning from human feedback to understand and follow instructions. Nonetheless, these methods are either founded on hand-picked model generations that are favored by human annotators, rendering them inefficient in terms of data utilization and challenging to apply in general, or they depend on reinforcement learning, which often suffers from imperfect reward functions and relies on extremely challenging optimizations. In this work, we propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity. Our idea is inspired by how humans learn from extensive feedback presented in the form of languages. We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model, allowing us to take advantage of the language comprehension capabilities of language models. We condition the model on a sequence of model generations paired with feedback. By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors. Applying our method to large language models, we observed that Chain of Hindsight significantly surpasses previous methods in aligning language models with human preferences. We report significant improvements on summarization and dialogue benchmarks, with our approach markedly preferred in human evaluations. 1 Introduction Large language models have achieved amazing results in natural language understanding (Radford et al., 2018, 2019; Brown et al., 2020). However, in order to ensure that these technologies have a positive impact on society, it is of paramount importance for them to be aligned with human values. One of the most critical elements in achieving this is the use of human feedback. Human feedback allows us to evaluate the performance of such models in a way that is both objective and subjective. It can help to identify issues with accuracy, fairness, and bias, and can provide insights into how the model can be improved, in order to ensure that the model outputs align with societal norms and expectations. Driven by the importance of incorporating human feedback into language models, researchers have been developing and testing various methods for human-in-the-loop systems. These methods aim to make the process of incorporating human feedback more efficient, resulting in models that are able to achieve improved performance and accuracy, while also providing higher fairness and more ethical outputs (Hancock et al., 2019; Perez et al., 2019; Yi et al., 2019; Ouyang et al., 2022; Schulman et al., 2022, inter alia). The successes in language modeling have been largely attributed to the utilization of supervised finetuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) techniques. While these approaches have demonstrated promising results in enhancing the performance of language models on specific tasks, they also suffer from notable limitations. SFT relies on human-annotated data and positive-rated model generation to fine-tune a pretrained language model. 
However, this approach is heavily reliant on the availability of labeled data, which may entail significant expenses and time investments.\footnote{https://github.com/lhao499/chain-of-hindsight} Moreover, relying solely on positive-rated data may constrain the model's ability to identify and correct negative attributes or errors, thus reducing its generalizability to new and unseen data. Alternatively, RLHF enables learning from all data, regardless of feedback rating. Nonetheless, this method requires learning a reward function, which may be subject to misalignment and imperfections [Gao et al., 2023]. In addition, the optimization of reinforcement learning algorithms can be challenging, presenting significant difficulties in its application.

Figure 1: Human evaluation pairwise comparison between CoH and various approaches on the summarization and dialogue tasks. Base denotes the pretrained model, SFT-U denotes SFT with unlikelihood loss, C-SFT denotes conditional SFT. CoH substantially outperforms reinforcement learning from human feedback (RLHF) and supervised finetuning baselines.

In this work, we aim to overcome the limitations of SFT and RLHF by combining their strengths to leverage all feedback, without resorting to reinforcement learning. Our key idea is that humans are capable of learning from rich and detailed feedback in the form of comparisons. Our hypothesis is that by conditioning language models on a sequence of generations paired with feedback and training them accordingly, they can learn to identify and correct errors and negative attributes. Moreover, prior research has underscored the efficacy of pretrained language models for both in-context learning and instruction tuning [Radford et al., 2019; Brown et al., 2020; Wei et al., 2021, inter alia]. Building upon these insights, we introduce a novel approach: converting all human feedback into a sequence and subsequently finetuning models to comprehend and effectively utilize such feedback. Specifically, we finetune the model to generate outputs while conditioning on one or more model-generated outputs and their corresponding feedback, presented in the form of comparisons to the other outputs. During the training phase, the model is given feedback expressions like 'A unhelpful answer' and 'A helpful answer'. It is then tasked with predicting outputs that align more closely with the feedback, such as in the following example: 'How can you explain neural networks to a 6-year-old? A unhelpful answer: {a subpar answer} A helpful answer: {an excellent answer}.' Furthermore, our framework allows for the integration of natural language feedback, such as '{a subpar answer} is a less preferred answer compared with {an excellent answer}', which not only informs the model of the preference but also provides additional task-specific guidance. At inference time, when presented with positive feedback indicated by 'A helpful answer', the model is guided to generate the desired outputs, thereby ensuring a preferable behavior. Our proposed approach enables models to learn from both positive and negative feedback, allowing the identification and correction of negative attributes or errors. We name our method Chain of Hindsight (CoH) as it conditions on a sequence of hindsight feedback.
We conducted comprehensive evaluations of our approach in the domains of summarization and dialogue tasks, revealing substantial performance enhancements compared to SFT and its various iterations, as well as RLHF, across both automated assessments and human evaluations. Our main contributions are twofold: (a) We introduce a novel learning framework, referred to as CoH, which effectively harnesses all available feedback data to enhance model performance without necessitating reliance on RLHF. Notably, our approach CoH maintains the same training objective as pretraining, rendering it straightforward to train and readily scalable; (b) Figure 2: Chain of Hindsight (CoH) turns human preferences into rich and detailed feedback in the form of comparisons. In the diagram, we explain this by showing that a question is being prompted to GPT model. The model then generates a multitude of responses, which are subsequently ranked according to human preferences (e.g., A is less preferred compared with B). Subsequently, we construct CoH sequences by converting human preference into natural language feedback and combine them with the model’s outputs. These constructed sequences are then employed in the finetuning phase, aligning with the same objectives as in the pretraining phase. We conduct extensive experiments to showcase the effectiveness of our method in comparison to existing baselines, including state-of-the-art RLHF methods. 2 CHAIN OF HINDSIGHT Our goal is to improve the performance of a Transformer-based language model by leveraging human-rated data and feedback, and to achieve this, we propose a novel approach that goes beyond conventional SFT methods and RLHF methods. Turning all feedback into a sequence. Our approach aims to take into account all feedback and instructions provided by humans. To achieve this, we present the model with a sequence of model generations, along with corresponding feedback and explanations provided by humans. Our approach uses a conventional Transformer model architecture that is causal and decoder-only, as proposed in the work of Brown et al. [2020] Vaswani et al. [2017] on attention mechanisms. This means that at each timestep, the model can only attend to the past timesteps and itself. Given a text represented by tokens \( x = [x_1, \ldots, x_n] \), the standard causal language modeling objective is defined to maximize the log likelihood of \( x \) autoregressively: \[ \log p(x) = \log \prod_{i=1}^{n} p(x_i | x_{<i}). \] In CoH, we construct \( x \) by combining multiple model outputs with feedback which are then used for instruction finetuning. For instance, when a model is prompted to explain neural networks to a child, it generates multiple responses to the prompt. These responses are then combined together into a sequence and paired with feedback instructions generated based on human ratings. An example is illustrated in Figure 2. During the training phase, the model is presented with both positive and negative feedback denoted as ‘Bad’ and ‘Good’, and the model is conditioned to predict outputs that better match the latter feedback such as ‘How to explain neural networks to a 6 year old? Bad: {a bad answer} Good: {a good answer}.’ Furthermore, our framework allows for the integration of natural language feedback, such as ‘How can you explain neural networks to a 6-year-old? Bad: {a subpar answer} Good: {an excellent answer}’, which provides additional task-specific guidance and context. 
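As a concrete illustration of how a human comparison is turned into a chain-of-hindsight training sequence, below is a minimal sketch. The template strings follow the examples given in the paper, but the exact wording, the template list, and the helper names are illustrative assumptions rather than the authors' released code.

```python
import random

# Feedback templates in the spirit of the paper's examples (the full list is in Appendix B).
TEMPLATES = [
    ("Good:", "Bad:"),
    ("A helpful answer:", "An unhelpful answer:"),
    ("A good summary:", "A worse summary:"),
]

def build_coh_sequence(prompt, preferred, dispreferred, rng=random):
    """Turn one ranked pair of model outputs into a CoH training sequence: the
    dispreferred output and its negative feedback come first, and the model is later
    trained to produce the preferred output conditioned on that hindsight."""
    pos_tag, neg_tag = rng.choice(TEMPLATES)
    return f"{prompt} {neg_tag} {dispreferred} {pos_tag} {preferred}"

def inference_prompt(prompt, pos_tag="Good:"):
    """At inference time the model is conditioned only on positive feedback."""
    return f"{prompt} {pos_tag}"
```

Which tokens of such a sequence actually receive a loss is described next.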
By incorporating a wider range of diverse positive and negative feedback, this framework further enhances the model's performance. In this study, we opted for templated feedback generated from ratings rather than open-ended feedback from humans in the loop. The feedback type varies depending on the task; we list the contextual natural language feedback in Appendix B.

Natural language feedback examples:
- A good summary: {positive}, a worse summary: {negative}
- You are a helpful assistant: {positive}, you are an unhelpful assistant: {negative}
- A bad answer is {negative}, a good answer is {positive}

In theory, one could employ open-ended feedback from humans in the loop. However, for this study, we chose to generate feedback using pre-determined templates based on ratings. During the inference phase, we prompt the model with positive feedback in the form of 'Good' to guide the model in generating favorable outputs.

To enable models to learn from feedback, we require the model to predict each token \( x_i \in x \) that is generated by the model. Loss is not applied to the other tokens, because doing so would hinder model generation at inference time. This is achieved through masking, which can be expressed as:

\[ \log p(x) = \log \prod_{i=1}^{n} \mathbb{1}_{O(x)}(x_i) \cdot p(x_i \mid [x_j]_{j=0}^{i-1}), \]

where \( \mathbb{1}_{O(x)}(x_i) \) indicates whether token \( x_i \) belongs to the hindsight feedback: it is 1 if \( x_i \) is not part of the feedback and 0 if it is. The model is thus trained to predict each non-feedback token \( x_i \) given the previous tokens \( [x_j]_{j=0}^{i-1} \).

**Algorithm 1** Aligning language models from feedback with Chain of Hindsight.
**Required:** Pretrained Language Model M, Human Feedback Dataset D
**Required:** Maximum training iterations \( n \)
Initialize
for \( iter = 1 \) to \( n \) do
  Randomly sample a minibatch of model outputs and their associated ratings from dataset \( D \).
  Construct training sequences by combining sampled model outputs with feedback based on ratings.
  Instruction-finetune model \( M \) on the training sequences.
end for

**Training.** We work with a dataset of model outputs and their corresponding human preferences, such as positive and negative ratings, from which we sample minibatches of model outputs. To generate hindsight feedback in natural language, we randomly sample a feedback format and incorporate the human ratings. We combine the hindsight feedback and model outputs into a chain of hindsight, which serves as the input for our autoregressive model. The objective is to predict the input sequence autoregressively, and we use the cross-entropy loss to optimize the model. We average the loss over each timestep in the last model output sequence. In the regime of human preference learning, the positive and negative data are often similar to each other (e.g., the Anthropic helpful and harmless dataset). Since CoH conditions the model on one example when predicting another one, the model can simply 'copy' the example without learning to understand the underlying task. To address this, we randomly mask between 0% and 5% of past tokens during training, which helps regularize the model and prevents it from overfitting to the specific examples seen during training (Srivastava et al., 2014; Liu et al., 2022). In order to retain the model's performance on general language modeling tasks, we add a regularization term which maximizes the log likelihood of the pretraining dataset, following prior works (Ouyang et al., 2022); a minimal sketch of this combined objective is given below.
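The combined objective just described — cross-entropy only on non-feedback tokens, random masking of a small fraction of past tokens, and a pretraining language-modeling regularizer — can be sketched as follows. The model interface (`model(ids)` returning next-token logits), the mask token id, and the regularization weight are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def coh_loss(model, tokens, is_feedback, pretrain_tokens, mask_id=0, reg_coef=1.0):
    """tokens:          (B, L) chain-of-hindsight token ids
       is_feedback:     (B, L) bool, True where the token belongs to the hindsight feedback
       pretrain_tokens: (B, L) token ids sampled from the pretraining corpus"""
    # Randomly mask 0-5% of past tokens so the model cannot simply copy the earlier example.
    frac = 0.05 * torch.rand(())
    keep = torch.rand(tokens.shape) > frac
    inputs = torch.where(keep, tokens, torch.full_like(tokens, mask_id))

    logits = model(inputs[:, :-1])                            # (B, L-1, vocab)
    targets = tokens[:, 1:]
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1), reduction="none")
    not_feedback = (~is_feedback[:, 1:]).float().reshape(-1)
    coh_term = (ce * not_feedback).sum() / not_feedback.sum()  # loss only on model-output tokens

    # Regularizer: plain language modeling on pretraining data to retain general ability.
    pre_logits = model(pretrain_tokens[:, :-1])
    pre_term = F.cross_entropy(pre_logits.reshape(-1, pre_logits.size(-1)),
                               pretrain_tokens[:, 1:].reshape(-1))
    return coh_term + reg_coef * pre_term
```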
We apply this technique to our method and all baselines in evaluation. Our approach is shown in Figure 2 and the algorithm is summarized in Algorithm 1. ### 2.1 Relation to Prior Paradigms We discuss the connections of CoH to prior paradigms of learning from preference data. **Supervised finetuning (SFT).** SFT is a commonly used method for preference learning, involving the use of positively labeled data for finetuning (Ouyang et al., 2022; Schulman et al., 2022). Our approach, however, diverges from SFT by incorporating both positive and non-positive rated data, as well as utilizing feedback input. In comparison to SFT, CoH leverages a broader spectrum of information. **Conditional SFT.** This method shares similarities with the Decision Transformer model (Chen et al., 2021), which involves conditional training of SFT with feedback serving as prefix tokens. In essence, both CoH and Conditional SFT utilize feedback tokens as conditional input. Nonetheless, the distinction lies in CoH’s utilization of a sequence of feedback-example pairs, enabling our approach to condition on a more comprehensive information when making predictions. **SFT with unlikelihood.** SFT with unlikelihood introduces an unlikelihood loss on negatively rated data (Welleck et al., 2019; Li et al., 2019) to the traditional SFT framework. **Reinforcement learning with human feedback (RLHF).** RLHF (Schulman et al., 2022; Ouyang et al., 2022; Stiennon et al., 2020) entails the acquisition of a reward function based on human preferences and the use of reinforcement learning to maximize this reward. In contrast to RLHF, CoH offers a substantially simpler training process, and as our experimental evaluations will demonstrate, it consistently outperforms RLHF in terms of performance. ### 3 Evaluation Setup **Training Datasets.** We use a combination of three datasets for learning from human feedback. The three datasets are: - **WebGPT.** The WebGPT dataset (Nakano et al., 2021)[2] includes a total of 19,578 comparisons where each example comprises a question, a pair of model answers, and metadata. The answers are rated by humans with a preference score, which helps to identify the better of the two answers. - **HH.** The Anthropic’s Helpful and Harmless (HH) dataset (Ganguli et al., 2022; Bai et al., 2022a) contains human rated dialogues[3]. Each example in this dataset consists of a pair of conversations between a human and a language model, and one of the two conversations is labeled as preferred by human labelers. - **Summarization.** The summarization dataset (Stiennon et al., 2020) consists of feedback from humans regarding the summarizations generated by a model[4]. Human evaluators were requested to choose the superior summary from two options presented to them. **Evaluation Benchmark and Metrics.** We consider both automatic evaluation and human evaluation on summarization and dialogue benchmarks. - **Summarization Benchmark.** Following prior RLHF works (Stiennon et al., 2020; Nakano et al., 2021; Bai et al., 2022a), we consider automatic evaluation and human evaluation on the TL;DRs dataset (Völske et al., 2017). The original TL;DR dataset contains about 3 million posts from reddit.com across a variety of topics (subreddits), as well summaries of the posts written by the original poster (TL;DRs). We use the filtered version provided by Stiennon et al. (2020), which contains 123,169 posts. We evaluate the performance on the validation set. 
For evaluation metrics, labelers rated summaries for coverage (how much important information from the original post is covered), accuracy (to what degree the statements in the summary are part of the post), coherence (how easy the summary is to read on its own), and overall quality. More details about evaluation dimensions and instructions for human labelers are available in Appendix A. - **Dialogue Benchmark.** We also evaluate on the validation split of the Anthropic’s Helpful and Harmless (HH) dataset (Ganguli et al., 2022; Bai et al., 2022a), where each example comprises a pair of conversations between a human and a large language model, with one of the two conversations preferred by a human. For evaluating the dialogue, we consider metrics such as helpfulness and harmlessness. A helpful model should follow instructions and infer intention from a few-shot prompt or another interpretable pattern. Since the intention of a given prompt can be unclear or ambiguous, we rely on judgment from our labelers, and the main metric we use is the labelers’ preference ratings. To collect data for our evaluation, it would be too costly and time-consuming to deploy our finetuned model to chat with humans. Instead, we construct “pseudo” dialogues using positive examples. We replace each model response from a previous dialogue with our model’s output, generated by conditioning the model on the human response and past model outputs. We take this approach instead of having humans directly chat with the --- [2] https://huggingface.co/datasets/openai/webgpt_comparisons [3] https://huggingface.co/datasets/Anthropic/hh-rlhf [4] https://huggingface.co/datasets/openai/summarize_from_feedback finetuned model to reuse human-generated data, as collecting interactive data can be very costly and is prone to low data quality issues. More details about evaluation dimensions and instructions for human labelers are available in Appendix A. **Baselines.** Our primary baselines are SFT, SFT with unlikelihood (denoted as SFT-U), conditional SFT (denoted as C-SFT), and RLHF, for connections between them and CoH please refer to Section 2.1. We use GPT-J 6B (Wang and Komatsuzaki, 2021) and OPT (Zhang et al., 2022) as the base pretrained models, while other language models can also be used. Following prior works (Ouyang et al., 2022; Schulman et al., 2022), we adopt the PPO algorithm (Schulman et al., 2017) to implement RLHF baseline. We tune the hyperparameters of PPO and reward learning to obtain the best possible results. To ensure a fair comparison, we carefully tune the training hyperparameters for all other baselines. ### 4 RESULTS Our main goal in conducting these evaluations is to assess the effectiveness of our proposed methodology, which focuses on summarization and dialogue benchmarks. We conduct both automatic and human evaluations, in order to benchmark our approach against established baselines, including SFT, conditional SFT, SFT with unlikelihood, and RLHF approach (Ouyang et al., 2022; Schulman et al., 2022). **Evaluation on summarization.** In Figure 3, we present the ROUGE scores of our models on test set of summarization dataset. Our proposed approach, CoH, substantially outperform baselines, including based pretrained model, SFT, conditional SFT, SFT with unlikelihood, and RLHF. Despite the simplicity of our approach, CoH outperforms RLHF across all the metrics. We notice that RLHF performs the second best, with conditional SFT closely follows behind. 
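The ROUGE numbers reported in Figure 3 can be reproduced with a standard toolkit; below is a minimal sketch assuming the `rouge-score` package, since the paper does not specify which implementation was used.

```python
from rouge_score import rouge_scorer  # one common ROUGE implementation; other toolkits also work

def rouge_f1(references, predictions):
    """Average ROUGE-1/2/L F1 of model summaries against reference TL;DRs (cf. Figure 3)."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    for ref, pred in zip(references, predictions):
        scores = scorer.score(ref, pred)
        for k in totals:
            totals[k] += scores[k].fmeasure
    return {k: v / len(references) for k, v in totals.items()}
```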
To further evaluate the performance of our proposed approach, we conducted human evaluation as shown in Table 1. Base denotes the pretrained model, SFT-U denotes SFT with unlikelihood, C-SFT denotes conditional SFT. We conducted pairwise comparisons between CoH and the baselines because we found that doing so is an easier task for human labelers compared to evaluating multiple | Metric | Pretrained | SFT | Conditional SFT | SFT unlikelihood | RLHF | CoH | |--------|------------|-----|-----------------|------------------|------|-----| | Rouge 1| | | | | | | | Rouge 2| | | | | | | | Rouge L| | | | | | | | Avg | | | | | | | **Table 1: Pairwise human evaluation on summarization task.** | Human evaluation win rate (%) | Base | Tie | CoH | Δ | |-------------------------------|------|-----|-----|-----| | Accuracy | 24.5 | 26.8| 48.7| 24.2| | Coherence | 15.6 | 18.5| 65.9| 50.3| | Coverage | 19.6 | 22.4| 58.0| 38.4| | **Average** | 19.9 | 22.6| 57.5| **37.6**| | SFT | Tie | CoH | Δ | |-----|-----|-----|-----| | Accuracy | 25.5 | 32.6 | 41.9 | 16.4 | | Coherence | 30.5 | 25.6 | 43.9 | 13.4 | | Coverage | 28.5 | 25.4 | 46.1 | 17.6 | | **Average** | 28.2 | 27.9 | 44.0 | **15.8** | | C-SFT | Tie | CoH | Δ | |-------|-----|-----|-----| | Accuracy | 26.7 | 34.9 | 38.4 | 11.7 | | Coherence | 32.5 | 22.9 | 44.6 | 12.1 | | Coverage | 29.5 | 26.7 | 43.8 | 14.3 | | **Average** | 29.6 | 28.2 | 42.3 | **12.7** | | SFT-U | Tie | CoH | Δ | |-------|-----|-----|-----| | Accuracy | 18.7 | 17.9 | 63.4 | 44.7 | | Coherence | 21.8 | 15.8 | 62.4 | 40.6 | | Coverage | 23.6 | 17.2 | 59.2 | 35.6 | | **Average** | 21.4 | 17.0 | 61.7 | **40.3** | | RLHF | Tie | CoH | Δ | |------|-----|-----|-----| | Accuracy | 31.8 | 29.5 | 38.7 | 6.9 | | Coherence | 31.6 | 20.5 | 47.9 | 16.4 | | Coverage | 28.9 | 21.9 | 49.2 | 20.3 | | **Average** | 30.8 | 24.0 | 45.3 | **14.5** | Figure 4: **Evaluation on dialogue.** Comparing CoH with RLHF and SFT baselines. The metric is the accuracy of classifying the preferred dialogue. We hired 75 human labelers who were proficient in English from a third-party platform to provide ratings. In the pairwise comparison, human labelers were presented with two summaries, one generated by the baseline and the other generated by CoH. They were instructed to select the best (or tie) among the two according to the three metrics mentioned above. The metrics are accuracy, coherency and coverage following prior works (Ouyang et al., 2022). We used the same instructions therein, and additional instructions are provided in Appendix A. Table 1 presents the human evaluation results on summarization task. CoH substantially outperform RLHF and conditional SFT, showcasing the effectiveness of CoH in aligning language models with human preferences. **Evaluation on dialogue.** We evaluate our method on the HH dataset, by testing its ability to classify which of a dialogue pair is preferred. Figure 4 presents the comparison between baselines and our method. SFT shows substantially improvement over base pretrained model; adding unlikelihood degrades performance which indicates unlikelihood hurts model generation ability; conditional SFT shows improvement over SFT, showcasing the benefit of learning from negative examples; RLHF performs second best and is substantially outperformed by our CoH. The results demonstrate the effectiveness of CoH in learning from preferences. We further evaluate on the dialogue task based on HH dataset. 
We use the same setting of 75 human labelers and pairwise comparison as in the summarization human evaluation. For this task, we provide human labelers with instructions to evaluate whether the answer is helpful and harmless (Bai et al., 2022a). The results are presented in Table 2. CoH substantially outperform RLHF and conditional SFT, showcasing the effectiveness of CoH in aligning language models with human preferences. **Language feedback.** We enhance the effectiveness of our approach by evaluating its performance in the context of binary feedback alone, as opposed to the combination of binary feedback and fine-grained language feedback, which is the default setting of our method. We denote this baseline without natural language feedback as CoH w/o LF. To assess the performance of these variations, we conducted a human evaluation task focused on the summarization domain, employing the input of 75 human evaluators. The outcomes, as presented in Table 3, show that both our default approach and our ‘w/o LF’ variant substantially outperform RLHF. In addition, our findings indicate that the inclusion of natural language feedback enhances the results. Human preference ratings show a 14.1% | Human evaluation win rate (%) | Base | Tie | CoH | Δ | |-------------------------------|------|-----|-----|-----| | Helpful | 15.8 | 34.8| 49.4| 33.6| | Harmless | 14.5 | 35.9| 49.6| 35.1| | **Average** | 15.2 | 35.3| **49.5**| **34.4**| | SFT | Tie | CoH | Δ | |-----|-----|-----|-----| | Helpful | 19.6 | 45.7 | 34.7 | 15.1 | | Harmless | 18.6 | 37.4 | 44.0 | 25.4 | | **Average** | 19.1 | 41.5 | **39.4** | **20.3** | | C-SFT | Tie | CoH | Δ | |-------|-----|-----|-----| | Helpful | 21.8 | 46.9 | 31.3 | 9.5 | | Harmless | 22.4 | 35.2 | 42.4 | 20.0 | | **Average** | 22.1 | 41.0 | **36.8** | **14.7** | | SFT-U | Tie | CoH | Δ | |-------|-----|-----|-----| | Helpful | 13.4 | 31.3 | 55.3 | 41.9 | | Harmless | 14.5 | 28.7 | 56.8 | 42.3 | | **Average** | 13.9 | 30.0 | **56.0** | **42.1** | | RLHF | Tie | CoH | Δ | |------|-----|-----|-----| | Helpful | 25.8 | 40.8 | 33.4 | 7.6 | | Harmless | 20.9 | 38.8 | 40.3 | 19.4 | | **Average** | 23.4 | 39.8 | **36.9** | **13.5** | preference for models with language feedback, whereas models without language feedback received an 11.6% preference. The results demonstrate the effectiveness of our CoH framework. Since the framework of CoH offers flexibility to incorporate natural language feedback into training, designing more effective natural language feedback is one of our future directions. **Evaluation on model scaling trend.** To assess the efficacy of CoH across various model sizes, we conducted a comprehensive evaluation. The findings in Figure 5 demonstrate the impact of varying model sizes on the performance of the CoH method relative to SFT baselines and RLHF. Notably, for smaller model sizes, CoH exhibits a marginal decrement in performance compared to SFT baselines. However, as the model size increases, CoH consistently surpasses all SFT and RLHF baselines and displays a positive scaling trend, indicating its efficacy in enhancing model performance as model complexity increases. ### 5 RELATED WORK **Learning from hindsight.** In this paper we explore learning from chains of hindsight with human feedback, an approach that enables a model to learn from errors and revise generations. The key idea of learning from hindsight experience was explored in goal conditioned RL [Kaelbling (1993), Andrychowicz et al. (2017), Schaul et al. (2015)]. Andrychowicz et al. 
Andrychowicz et al. (2017) propose hindsight experience replay (HER), which relabels rewards and transitions retroactively in order to learn from sparse feedback. While HER relies on reinforcement learning and a distance function to learn from hindsight experience, we propose a new method, CoH, that constructs a chain of hindsight experience using human feedback and finetunes the model directly. Our approach offers several advantages over other methods, such as HIR (Zhang et al., 2023), which also makes use of incorrect model outputs. HIR can be seen as a special case of CoH with a chain-of-hindsight of length one. Unlike HIR, which employs a complex training process involving a likelihood loss, a contrastive loss, and an entropy loss, our approach is straightforward and easy to implement. Concurrently, Korbak et al. (2023) study conditioning on human preference during pretraining and show improved performance in aligning language models with human preference. Their method is similar to CoH with a chain-of-hindsight of length one. Our work focuses on finetuning pretrained language models, while Korbak et al. (2023) focus on improving pretraining.

Figure 5: **Model scaling trend.** Comparing CoH with RLHF and SFT baselines on the summarization benchmark with different model sizes. CoH outperforms RLHF, showing strong scaling capabilities.

**Table 3: Ablation study of natural language feedback on the summarization task based on human evaluation (average win rate, %).**

| Comparison | Left | Tie | Right |
|------------|------|-----|-------|
| RLHF vs. CoH | 30.8 | 24.0 | 45.3 |
| RLHF vs. CoH w/o LF | 32.1 | 26.5 | 42.4 |
| CoH w/o LF vs. CoH | 10.6 | 74.3 | 15.1 |

**Learning from human feedback.** Prior work has explored using human feedback to improve various tasks, such as summarization (Böhm et al., 2019; Ziegler et al., 2019; Stiennon et al., 2020), dialogue (Yi et al., 2019; Hancock et al., 2019; Bai et al., 2022a;b), translation (Kreutzer et al., 2018; Bahdanau et al., 2016), semantic parsing (Lawrence and Riezler, 2018), story generation (Zhou and Xu, 2020), review generation (Cho et al., 2018), evidence extraction (Perez et al., 2019), and instruction following (Ouyang et al., 2022; Bai et al., 2022a). The main techniques behind these works fall into two categories: supervised finetuning (SFT) on filtered human annotations, and learning a reward function from human feedback for reinforcement learning, often dubbed RLHF (Christiano et al., 2017; MacGlashan et al., 2017; Lee et al., 2021; Warnell et al., 2017), which has been used to train RL agents without the need for hand-designed rewards. Ouyang et al. (2022) demonstrate improved language model alignment by training models with SFT and RLHF using human feedback. Our work belongs to the SFT category, but differs from standard SFT in that our method conditions on feedback and can learn from examples without positive ratings. Our method is complementary to RLHF and the two can be directly combined for further improvement. Using instructions to provide models with human preferences and desired behaviors is demonstrated in Bai et al. (2022b), where models are prompted with a set of statements/principles and are trained with RLHF. In our work, we provide models with a sequence of model outputs and their feedback and train models to generate desired outputs conditioned on feedback/control tokens.
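To make this conditioning scheme concrete, the snippet below sketches how a chain-of-hindsight training example could be assembled from a ranked pair of model outputs and natural-language feedback. The template and the feedback phrases are illustrative placeholders only; the exact strings and control tokens used for training are not specified here.

```python
def build_coh_example(prompt, ranked_outputs, feedback_phrases):
    """Assemble a chain-of-hindsight training string.

    `ranked_outputs` is ordered from least to most preferred, and
    `feedback_phrases` gives one hindsight/control phrase per output
    (placeholder strings only).
    """
    parts = [prompt]
    for output, feedback in zip(ranked_outputs, feedback_phrases):
        parts.append(feedback)   # natural-language hindsight feedback / control token
        parts.append(output)     # the corresponding model output
    return " ".join(parts)

# Toy usage with two ranked summaries of the same article.
example = build_coh_example(
    prompt="Summarize: <article text>",
    ranked_outputs=["<worse summary>", "<better summary>"],
    feedback_phrases=["A less preferred summary is:", "A better, more accurate summary is:"],
)
# One would typically apply the language-modeling loss to the output tokens and,
# at inference time, prompt with the positive feedback phrase alone to elicit the
# preferred behavior (an assumption about usage, not a statement of the exact recipe).
```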
**Instruction finetuning and conditional training.** Finetuning on chains of hindsight using human feedback is akin to instruction finetuning. Driven by the impressive in-context learning ability of large language models, finetuning pretrained models on instructions has been shown to improve language models in many benchmarks (see, e.g., Wang et al., 2022; Mishra et al., 2021; Ye et al., 2021; Chung et al., 2022; Wei et al., 2021; Sanh et al., 2021; Zelikman et al., 2022; Huang et al., 2022, inter alia). Mostly, the instructions are reformatted examples from NLP benchmarks (e.g., Wei et al., 2021; Chung et al., 2022). CoT prompts (Wei et al., 2022) are widely considered as instructions in prior works (Chung et al., 2022; Wei et al., 2021), specifically in the form of step-by-step explanations written by humans. In relation to these, our chain of hindsight consists of human-written hindsight feedback and ranked model outputs. Conditional training (Keskar et al., 2019; Ficler and Goldberg, 2017; Laskin et al., 2022; Chen et al., 2021; Fan et al., 2018; Lu et al., 2022) explores conditioning the model on control tokens for controllable generation. In relation to this line of work, CoH generalizes conditioning to a sequence of control tokens instead of a single control token. By doing so, CoH enables the model to understand the differences between control tokens and their corresponding outputs. Our work suggests a promising direction of using hindsight feedback to construct instructions from model outputs, and can be combined with prior instruction finetuning and conditional training works for further improvements.

### 6 CONCLUSION

In conclusion, we introduce Chain of Hindsight (CoH), which is inspired by how humans learn from rich feedback in the form of comparisons. We condition language models on a sequence of hindsight feedback, allowing them to effectively leverage all examples regardless of their preference score. Extensive experiments on summarization and dialogue datasets show that CoH substantially outperforms RLHF and other baselines.

**Limitations and Future Work.** Although our method substantially outperforms the baselines, it does have some limitations that need to be addressed:

- Constructing CoH may result in long sequences, particularly with multiple feedback instances, leading to increased training computational expenses.
- Our work heavily relies on hired human labelers for evaluation due to their higher reliability compared to automated metrics. However, this approach incurs substantial costs, although this issue is not unique to our method.

In terms of future prospects, our CoH-based training from human feedback opens the door to exciting possibilities, such as integrating external environment feedback like unit tests and extending its applicability to various domains. Furthermore, our current focus on learning from hindsight using preexisting preferences paves the way for exploration in online preference learning, enabling iterative model improvements.

ACKNOWLEDGMENTS

This project is supported in part by Office of Naval Research grant N00014-21-1-2769 and SNSF Postdoc Mobility Fellowship 211086 and ONR MURI N00014-22-1-2773. We express our gratitude to the BAIR communities for their insightful discussions and feedback. We thank Google TPU Research Cloud for granting us access to TPUs.

REFERENCES

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. *Advances in neural information processing systems*, 30, 2017.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. *arXiv preprint arXiv:2112.00861*, 2021. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. *arXiv preprint arXiv:1607.07086*, 2016. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *arXiv preprint arXiv:2204.05862*, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. *arXiv preprint arXiv:2212.08073*, 2022b. Florian Böhm, Yang Gao, Christian M Meyer, Ori Shapira, Ido Dagan, and Iryna Gurevych. Better rewards yield better summaries: Learning to summarise without references. *arXiv preprint arXiv:1909.01214*, 2019. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems*, 34:15084–15097, 2021. Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, and Jianfeng Gao. Towards coherent and cohesive long-form text generation. *arXiv preprint arXiv:1811.00511*, 2018. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In *Advances in Neural Information Processing Systems*, pages 4299–4307, 2017. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. *arXiv preprint arXiv:2210.11416*, 2022. Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. *arXiv preprint arXiv:1805.04833*, 2018. Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. *arXiv preprint arXiv:1707.02633*, 2017.
kWsJkH1tNi
Based on my understanding of the analysis, I assume that the same approach can be applied to the number of local steps per round if each data point would participate in just one local round. As a result, the number of local steps would also appear in the bound. It would be interesting to see its effect, as in the experiments of the paper, the total number of SGD steps is fixed, and with the increase of R, the number of local steps decreases.
Federated Learning, Lessons from Generalization Study: You May Need to Communicate Less Often Anonymous authors Paper under double-blind review Abstract We investigate the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, we study the evolution of the generalization error with the number of communication rounds between the clients and the parameter server, i.e., the effect on the generalization error of how often the local models as computed by the clients are aggregated at the parameter server. In our setup, the more the clients communicate with the server the less data they use for local training at each round. We establish PAC-Bayes and rate-distortion theoretic bounds on the generalization error that account explicitly for the effect of the number of rounds, say $R \in \mathbb{N}^*$, in addition to the number of participating devices $K$ and individual datasets size $n$. The bounds, which apply in their generality for a large class of loss functions and learning algorithms, appear to be the first of their kind for the FL setting. Furthermore, we apply our bounds to FL-type Support Vector Machines (FSVM); and we derive (more) explicit bounds on the generalization error in this case. In particular, we show that the generalization bound of FSVM increases with $R$, suggesting that more frequent communication with the parameter server diminishes the generalization power of such learning algorithms. This implies that comparatively with the empirical risk, the population risk decreases less faster with $R$. Moreover, our bound suggests that for every $R$, the generalization error of the FSVM setting decreases faster than that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction for $R = 1$ (known as “one-shot” FL) to arbitrary number of rounds. Furthermore, we also provide results of experiments that are obtained using neural networks (ResNet-56) which show evidence that not only may our observations for FSVM hold more generally, but also that the population risk may even start to increase beyond some value of $R$. 1 Introduction A major focus of machine learning research over recent years has been the development of statistical learning algorithms that can be applied to spatially distributed data. In part, this is due to the emergence of new applications for which it is either not possible (due to lack of resources [Zinkevich et al., 2010, Kairouz et al., 2021]) or not desired (due to privacy concerns [Truex et al., 2019, Wei et al., 2020, Mothukuri et al., 2021]) to collect all the data at one point prior to applying a suitable machine learning algorithm to it [Verbraeken et al., 2020, McMahan et al., 2017, Konečný et al., 2016, Kasiviswanathan et al., 2011]. One popular such algorithm is Federated-Learning (FL) [McMahan et al., 2017, Yang et al., 2018, Kairouz et al., 2019, Li et al., 2020, Reddi et al., 2020, Karimireddy et al., 2020, Yuan et al., 2021]. In FL, there are $K$ clients or devices that hold each their own dataset and collaborate to collectively train a (global) model without sharing their data. The distributions of the clients’ data can be identical (homogeneous) or different (heterogeneous). The model is depicted in Figure 1 and described in Section 2. 
An FL algorithm typically proceeds in $R \in \mathbb{N}^*$ communication rounds: at each round, every device by using some potentially different local algorithms, e.g., Stochastic Gradient Descent (SGD), produces a local model. Then, all these local models of all clients are sent to the parameter server (PS) which aggregates them into a (global) model. This aggregated model is sent back to clients and used typically as an initialization point for the local algorithms in the next round, although more general forms of statistical dependencies are allowed. The multi-round interactions between clients and PS are critical to the FL algorithm. Despite its importance, however, little is known about its effect on the performance of the algorithm. In fact, it was shown theoretically [Stich, 2019, Haddadpour et al., 2019, Qin et al., 2022], and also observed experimentally therein, that in FL-type algorithms the empirical risk generally decreases with the number of rounds. This observation is sometimes over-interpreted and it is believed that more frequent communication of the devices with the PS is generally beneficial for the performance of FL-type algorithms during inference or test phases. This belief was partially refuted in a very recent work (Chor et al., 2023) where it was shown that the generalization error may increase with the number of rounds. Their result, which is obtained by studying the evolution of a bound on the generalization error that they developed for their setup, relies strongly, however, on the assumed assumptions, namely (i) linearity of the considered Bregman divergence loss with respect to the hypothesis, (ii) all the devices are constrained to run an SGD with mini-batch size one at every iteration and (iii) linearity of the aggregated model with respect to the devices’ individual models. Also, even for the restricted setting considered therein, their bound on the generalization error (Chor et al., 2023 Theorem 1), which is essentially algebraic in nature, does not exploit fully the distributed architecture of the learning problem. Moreover, the dependence on the number of rounds is somewhat buried, at least in part, by decomposing the problem into several virtual one-client SGDs which are inter-dependent among clients and rounds through their initializations. The effect of the multi-round interactions on the performance of FL algorithms remains highly unexplored, however, for general loss functions and algorithms. For example, with the very few exceptions that we discuss in this paper (most of which pertain to the specific case $R = 1$ (Yagli et al., 2020; Barnes et al., 2022a; Sefidgaran et al., 2022a), and with relatively strong assumptions on the loss function and algorithm) the existing literature crucially lacks true bounds on the generalization error for FL-type algorithms, i.e., ones that bring the analysis of the distributed architecture and rounds into the bound, and even less that show some form of explicit dependency on $R$. One central mathematical difficulty in studying the behavior of the expected generalization error is caused by the interactions with the PS inducing statistical correlations among the devices’ models which become stronger with $R$ and are not easy to handle. For example, common tools that are generally applied in similar settings, such as the Leave-one-out Expansion Lemma of Shalev-Shwartz et al. (2010), do not apply easily in this case. Contributions. 
In this paper, we study the problem of how the generalization error of FL-type algorithms evolves with the number of rounds $R$. Unless otherwise specified (for the specialization to Support Vector Machines in Section 4), we assume no particular form for the devices’ individual algorithms or deterministic aggregation function at the PS. For this general setting: - We establish PAC-Bayes bounds (Theorems 1 and 2) and rate-distortion theoretic bounds (Theorem 3 and Theorem 6 in the appendix) on the generalization error that account explicitly for the effect of the number of rounds $R$, in addition to the number of participating devices $K$ and the size of the dataset $n$. These bounds appear to be the first of their kind for the problem that we study. The established bounds reflect the structure of the distributed interactive learning algorithm in particular by capturing the contribution of each client at each round to the generalization error of the final model. Our bounds are in terms of averages of such contributions among the clients and rounds. In a sense, this validates the intuition that, for a desired generalization error of the final model, some devices may be allowed to overfit during some or all of the rounds as long as other devices compensate for that overfitting. That is, the targeted generalization error level of the final model is suitably split among the devices and rounds. This intuition is also captured, but in a different way, by our lossy bounds of Theorems 2, 5 and 6 in the form of a trade-off between the amounts of “lossy compressions” (or “distortion levels”) of all clients across all rounds. Finally, we notice that the Kullback–Leibler divergence terms in our PAC-Bayes bounds have the advantage of involving priors that are possibly distinct across devices and rounds. This may be beneficial when these terms are used as regularizers during training. This direction, which can be seen as an extension of the centralized online setup of Haddouche & Guedj (2022) is left for future works. - We apply our bounds to Federated Support Vector Machines (FSVM); and derive (more) explicit bounds on the generalization error in this case (Theorem 4). Interestingly, we show that the margin generalization bound of FSVM decreases with $K$ and increases with $R$. In particular, this suggests that more frequent communication with the PS diminishes the generalization power of FSVM algorithms. As a consequence, comparatively with the empirical risk, the population risk decreases less faster with $R$. Besides, our bound suggests that for any $R$, the generalization error of the FSVM setting decreases faster than that of centralized learning by a factor of $O(\sqrt{\log(K)/K})$, thereby generalizing recent findings in this direction (Sefidgaran et al., 2022a) for $R = 1$ to any arbitrary number of rounds. • We validate our theoretical findings for FSVM through experiments. Moreover, we perform similar experiments using neural networks (ResNet-56) and observe that our findings obtained for FSVM also hold in this case. That is: (i) the generalization error increases with the number of rounds $R$, and (ii) due to the tradeoff between the empirical risk and generalization error, there exists potentially an optimal number of rounds $R^*$ that minimizes the population risk. 
We hasten to mention that the total number of training data points and SGD iterations are kept fixed regardless of the value of $R$; hence, the observed increase in the generalization error cannot be attributed to the classical “overfitting” phenomenon. We remark that in the particular case of FL-based SGD, at a high level, there exists a connection between our setup and LocalSGD (Stich [2019], Haddadpour et al. [2019], Qin et al. [2022], Gu et al. [2023]), which focuses on the problem of parallel computing. The LocalSGD literature, however, mostly reports improvements in convergence rates; and their proof techniques do not seem to be applicable for the study of the generalization error. Our findings suggest that even in a centralized setup one may still achieve some performance gains, from the viewpoint of generalizability and population risk, by splitting the available dataset into smaller subsets, learning from each separately, aggregating the learned models, and then iterating. **Notations.** We denote random variables (r.v.), their realizations, and their domains by upper-case, lower-case, and calligraphy fonts, e.g., $X$, $x$, and $\mathcal{X}$. We denote the distribution of $X$ by $P_X$ and its support by $\text{supp } P_X$. A r.v. $X$ is called $\sigma$-subgaussian, if for all $t \in \mathbb{R}$, $\log \mathbb{E}[\exp(t(X - \mathbb{E}[X]))] \leq \sigma^2 t^2/2$, where $\mathbb{E}$ denote the expectation. As an example, if $X \in [a,b]$, then $X$ is $\frac{b-a}{2\sqrt{2}}$-subgaussian. For two distributions $P$ and $Q$ with the Radon-Nikodym derivative $dQ/dP$ of $Q$ with respect to $P$, the Kullback–Leibler (KL) divergence is defined as $D_{KL}(Q||P) := \mathbb{E}_Q[\log(dQ/dP)]$ if $Q \ll P$ and $\infty$ otherwise. The mutual information between two r.v. $(X,Y)$ with distribution $P_{X,Y}$ and marginals $P_X$ and $P_Y$ is defined as $I(X;Y) := D_{KL}(P_{X,Y} || P_X P_Y)$. The notation $\{x_i\}_{i \in [m]}$ is used to denote a collection of $m$ real numbers. The integer ranges $\{1,\ldots,K\} \subset \mathbb{N}^*$ and $\{K_1,\ldots,K_2\} \subset \mathbb{N}^*$ are denoted by $[K]$ and $[K_1 : K_2]$, respectively. Finally, for $k \in [K]$, we use the shorthand notation $[K]\backslash k := \{1,\ldots,K\}\backslash\{k\}$. ## 2 FORMAL PROBLEM SETUP Consider the $K$-client federated learning model shown in Figure 1. **Datasets.** For $k \in [K]$, let $Z_k$ be some input data for client or device $k$ distributed according to an unknown distribution $\mu_k$ over some data space $Z_k = Z$. For example, in supervised learning settings $Z_k := (X_k, Y_k)$ where $X_k$ stands for a data sample at device $k$ and $Y_k$ its associated label. The distributions $\{\mu_k\}$ are allowed to be distinct across devices. Each client is equipped with its own training dataset $S_k := \{Z_{k,1},\ldots,Z_{k,n}\} \subseteq Z^n$, consisting of $n$ independent and identically distributed (i.i.d.) data points drawn according to the unknown distribution $\mu_k$. We consider an $R$-round learning framework, $R \in \mathbb{N}^*$, where every sample of a training dataset can be used during only one round by the device that holds it, but possibly multiple times during that round. Accordingly, it is assumed that every device partitions its data $S_k$ into $R$ disjoint subsets $S^{(r)}_k$ such that $S_k = \bigcup_{r \in [R]} S^{(r)}_k$ where $S^{(r)}_k$ is the dataset used by client $k \in [K]$ during round $r \in [R]$. 
This is a reasonable assumption that encompasses many practical situations in which, at each round, every client has access to new data. For ease of exposition, we assume that $R$ divides $n$, let $n_R := n/R$, and write $S^{(r)}_k := \{Z^{(r)}_{k,1},\ldots,Z^{(r)}_{k,n_R}\}$. Also, throughout we will often find it convenient to use the handy notation $P_{S^{(r)}_k} := \mu_k^{\otimes n_R}$ and, for $k \in [K]$ and $r \in [R]$,
$$S^{(r)}_{[K]} := \big(S^{(r)}_1, \ldots, S^{(r)}_K\big), \qquad S^{[r]}_k := \big(S^{(1)}_k, \ldots, S^{(r)}_k\big), \qquad S := S^{[R]}_{[K]}.$$ (1)
Similar notations will be used for other variables, e.g., $W^{(r)}_{[K]} = \big(W^{(r)}_1, \ldots, W^{(r)}_K\big)$ and $\overline{W}^{[r]} = \big(\overline{W}^{(1)}, \ldots, \overline{W}^{(r)}\big)$.

**Overall algorithm.** The devices collaboratively train a (global) model by performing both local computations and updates based on $R$-round interactions with the parameter server (PS). Let $A_k$ denote the algorithm used by device $k \in [K]$. An example is the popular SGD, where in round $r$ every device $k$ takes one or more gradient steps with respect to samples from the part $S^{(r)}_k$ of its local dataset $S_k$. It should be emphasized that the algorithms $\{A_k\}$ may be identical or not. During round $r \in [R]$ the algorithm $A_k$ produces a local model $W^{(r)}_k$. At the end of every round $r$, all the devices send their individual models $W^{(r)}_k$ to the PS, which aggregates them into a (global) model $\overline{W}^{(r)} \in \mathcal{W}$ and sends it back to them (the reader is referred to Appendix B for some extensions of this setup). The aggregated model is used by every device in the next round $(r + 1)$, together with the part $S_k^{(r+1)}$ of its dataset $S_k$, in order to obtain a new local model $W_k^{(r+1)}$.

**Local training at devices.** Formally, the algorithm $A_k$ is a possibly stochastic mapping $A_k : \mathcal{Z}^{n_R} \times \mathcal{W} \rightarrow \mathcal{W}$; and, for $r \in [R]$, we have $W_k^{(r)} := A_k\big(S_k^{(r)}, \overline{W}^{(r-1)}\big)$ -- for convenience, we set $\overline{W}^{(0)} = \emptyset$ or some default value. We denote the conditional distribution induced by $A_k$ over $\mathcal{W}$ at round $r$ by $P_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}}$.

**Model aggregation.** The aggregation function at the PS is set to be deterministic and arbitrary, such that at round $r$ this aggregation can be represented equivalently by a degenerate conditional distribution $P_{\overline{W}^{(r)}|W^{(r)}_1, \ldots, W^{(r)}_K}$. A common choice is the simple averaging $\overline{W}^{(r)} = \big(\sum_{k=1}^K W_k^{(r)}\big)/K$. The above process repeats until all $R$ rounds are completed, and yields a final (global) model $\overline{W}^{(R)}$. Let $W = \big(W^{[R]}_{[K]}, \overline{W}^{[R]}\big)$, where the notation used here and throughout is similar to (1).

**Induced probability distributions.** The above-described algorithm, summarized in Algorithm 1 in the appendices, induces the conditional distribution
$$P_{W|S} = \bigotimes_{r \in [R]} \left\{ \bigotimes_{k \in [K]} \Big( P_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}} \Big)\, P_{\overline{W}^{(r)}|W^{(r)}_{[K]}} \right\},$$ (2)
of models, whose joint distribution with $S$ is
$$P_{S,W} = \bigotimes_{r \in [R]} \left\{ \bigotimes_{k \in [K]} \Big( P_{S_k^{(r)}}\, P_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}} \Big)\, P_{\overline{W}^{(r)}|W^{(r)}_{[K]}} \right\}.$$ (3)
Hereafter, we will refer to the aforementioned algorithm for short as a $(P_{W|S}, K, R, n)$-FL model.
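For concreteness, the snippet below sketches the multi-round procedure just described (Algorithm 1 in the appendices) for the FL-SGD instantiation with simple averaging at the PS: each client's data is split into $R$ disjoint per-round shards, each round runs local mini-batch SGD initialized at the previous aggregated model, and the PS averages the resulting local models. This is a minimal NumPy illustration under assumed choices (constant learning rate, projection onto a ball, generic `grad_fn` for the surrogate loss), not the implementation used in the experiments.

```python
import numpy as np

def fl_sgd(client_data, R, epochs, batch_size, lr, grad_fn, d, radius=1.0, seed=0):
    """Multi-round FL-SGD with K clients, R rounds, and simple averaging at the PS.

    client_data[k] is an array of n samples for client k; each client splits it
    into R disjoint shards S_k^{(r)} and uses shard r only during round r.
    grad_fn(batch, w) returns the gradient of the surrogate loss on a mini-batch.
    """
    rng = np.random.default_rng(seed)
    K = len(client_data)
    w_bar = np.zeros(d)                                    # \bar{W}^{(0)}: default initialization
    shards = [np.array_split(rng.permutation(data), R) for data in client_data]

    def project(w):                                        # Proj onto the ball B_d(radius)
        norm = np.linalg.norm(w)
        return w if norm <= radius else radius * w / norm

    for r in range(R):
        local_models = []
        for k in range(K):
            w = w_bar.copy()                               # initialize at previous aggregate
            shard = shards[k][r]
            for _ in range(epochs):
                rng.shuffle(shard)
                for start in range(0, len(shard), batch_size):
                    batch = shard[start:start + batch_size]
                    w = project(w - lr * grad_fn(batch, w))
            local_models.append(w)                         # W_k^{(r)}
        w_bar = np.mean(local_models, axis=0)              # PS aggregation \bar{W}^{(r)}
    return w_bar                                           # final global model \bar{W}^{(R)}
```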
**Generalization error.** Let $\ell : \mathcal{Z} \times \mathcal{W} \rightarrow \mathbb{R}^+$ be a given loss function. For a (global) model or hypothesis $\overline{w}^{(R)} \in \mathcal{W}$, its associated empirical and population risks are defined respectively as
$$\hat{\mathcal{L}}(s, \overline{w}^{(R)}) = \frac{1}{nK} \sum_{k=1}^K \sum_{i=1}^{n} \ell(z_{k,i}, \overline{w}^{(R)}), \qquad \mathcal{L}(\overline{w}^{(R)}) = \frac{1}{K} \sum_{k=1}^K \mathbb{E}_{Z_k \sim \mu_k} \big[\ell(Z_k, \overline{w}^{(R)})\big].$$ (4)
Note that by letting $\hat{\mathcal{L}}(s_k^{(r)}, \overline{w}^{(R)}) = \frac{1}{n_R} \sum_{i=1}^{n_R} \ell(z_{k,i}^{(r)}, \overline{w}^{(R)})$, the empirical risk can be re-written as
$$\hat{\mathcal{L}}(s, \overline{w}^{(R)}) = \frac{1}{KR} \sum_{r=1}^R \sum_{k=1}^K \hat{\mathcal{L}}(s_k^{(r)}, \overline{w}^{(R)}).$$ (5)
The generalization error of the model $\overline{w}^{(R)}$ for the dataset $s = (s_1, \ldots, s_K)$, $s_k = \bigcup_{r=1}^R s_k^{(r)}$, is evaluated as
$$\text{gen}(s, \overline{w}^{(R)}) = \mathcal{L}(\overline{w}^{(R)}) - \hat{\mathcal{L}}(s, \overline{w}^{(R)}) = \frac{1}{KR} \sum_{r=1}^R \sum_{k=1}^K \text{gen}(s_k^{(r)}, \overline{w}^{(R)}),$$ (6)
where $\text{gen}(s_k^{(r)}, \overline{w}^{(R)}) = \mathbb{E}_{Z_k \sim \mu_k} \big[\ell(Z_k, \overline{w}^{(R)})\big] - \hat{\mathcal{L}}(s_k^{(r)}, \overline{w}^{(R)})$.

**Example (FL-SGD).** An important example is one in which every device runs Stochastic Gradient Descent (SGD) or variants of it, such as mini-batch SGD. In this latter case, denoting by $e$ the number of epochs and by $b$ the mini-batch size, at iteration $t \in [e\, n_R/b]$ client $k$ updates its model as
$$W_{k,t}^{(r)} = \text{Proj}\left(W_{k,t-1}^{(r)} - \frac{\eta_{r,t}}{b} \sum_{z \in B_{k,r,t}} \nabla \hat{\ell}\big(z, W_{k,t-1}^{(r)}\big)\right),$$
where $\hat{\ell} : \mathcal{Z} \times \mathcal{W} \rightarrow \mathbb{R}^+$ is some differentiable surrogate loss function used for optimization, $\eta_{r,t} > 0$ is the learning rate at iteration $t$ of round $r$, $B_{k,r,t} \subseteq S_k^{(r)}$ is the mini-batch of size $b$ chosen at iteration $t$, and $\text{Proj}(w') = \arg \min_{w \in \mathcal{W}} \|w - w'\|$. Also, in this case we let $W_k^{(r)} := W_{k,\tau}^{(r)}$, where $\tau := e\, n_R/b$. Besides, here the aggregation function at the PS is typically set to be the arithmetic average $\overline{W}^{(r)} = \big(\sum_{k \in [K]} W_k^{(r)}\big)/K$. This example will be analyzed in the context of Support Vector Machines in Section 4.

### 3 GENERALIZATION BOUNDS FOR FEDERATED LEARNING ALGORITHMS

In this section, we consider a (general) $(P_{W|S}, K, R, n)$-FL algorithm, as defined formally in Section 2, and study the generalization error of the final (global) hypothesis $\overline{W}^{(R)}$ as measured by (6). Note that the statistical properties of $\overline{W}^{(R)}$ are described by the induced distributions (2) and (3). We establish several bounds on the generalization error (6). The bounds, which are of PAC-Bayes type and rate-distortion theoretic, have the advantage of taking the structure of the studied multi-round interactive learning problem into account. Also, they account explicitly for the effect of the number of communication rounds $R$ with the PS, in addition to the number of devices $K$ and the size $n$ of each individual dataset. To the best of our knowledge, they are the first of their kind for this problem.

### 3.1 PAC-BAYES BOUNDS

For convenience, we start with two *lossless* bounds, which can be seen as distributed versions of those of McAllester (1998; 1999); Maurer (2004); Catoni (2003), tailored specifically to the multi-round interactive FL problem at hand.
**Theorem 1.** Assume that the loss $\ell(Z_k, w)$ is $\sigma$-subgaussian for every $w \in \mathcal{W}$ and any $k \in [K]$. Also, let, for every $k \in [K]$ and $r \in [R]$, $P_{k,r}$ denote a conditional prior on $W_k^{(r)}$ given $\overline{W}^{(r-1)}$. Then we have:

(i) With probability at least $(1 - \delta)$ over $S$, for all $P_{W|S}$, $\mathbb{E}_{W \sim P_{W|S}}\big[\text{gen}(S, \overline{W}^{(R)})\big]$ is bounded by
$$\sqrt{\frac{\frac{1}{KR} \sum_{k \in [K], r \in [R]} \mathbb{E}_{\overline{W}^{(r-1)} \sim P_{\overline{W}^{(r-1)}}} \Big[ D_{KL} \big( P_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}} \,\big\|\, P_{k,r} \big) \Big] + \log\big(\tfrac{\sqrt{2n}}{\sqrt{R}\,\delta}\big)}{(2n/R - 1)/(4\sigma^2)}}.$$

(ii) For any FL-model $P_{W|S}$, with probability at least $(1 - \delta)$ over $(S, W) \sim P_{S,W}$,
$$\text{gen}(S, \overline{W}^{(R)}) \leq \sqrt{\frac{\frac{1}{KR} \sum_{k \in [K], r \in [R]} \log \left( \frac{dP_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}}}{dP_{k,r}} \right) + \log\big(\tfrac{\sqrt{2n}}{\sqrt{R}\,\delta}\big)}{(2n/R - 1)/(4\sigma^2)}}.$$

The proof of Theorem 1, stated in Appendix F.1, judiciously extends the variable-size compressibility technique that was proposed recently in Sefidgaran & Zaidi (2023) in the context of establishing data-dependent PAC-Bayes bounds for a centralized learning setting, i.e., $K = 1$ and $R = 1$, to the more involved setting of FL. This extension is not trivial, however. For example, while the bound of part (i) of Theorem 1 involves KL-divergence terms that may seem typical of classic PAC-Bayes bounds, the utility of the result is in expressing the bound in terms of an average (over clients and rounds) of expected KL-divergence terms, where for every $r \in [R]$ the expectation is over $\overline{W}^{(r-1)}$. This is important, and non-intuitive, as the FL algorithm is not composed of $K \times R$ independent centralized algorithms. Indeed, a major technical difficulty in the analysis is to account properly for the problem's distributed nature as well as the statistical "couplings" among the devices' models, induced by the multi-round interactions. In part, these couplings are accounted for in the bound through the conditioning on $\overline{W}^{(r-1)}$. We refer the reader to the discussion after Theorem 2, which is a "lossy" version of the result of Theorem 1, on how the proof proceeds to break down the overall FL algorithm into $K \times R$ inter-dependent "centralized"-like algorithms. It should also be noted that, in fact, one could still consider the $R$-round FL problem end-to-end and view the entire system as a (virtual) centralized learning system with input the collection $S = (S_1, \ldots, S_K)$ of all devices' datasets and output the final aggregated model $\overline{W}^{(R)}$, and apply the results of Sefidgaran & Zaidi (2023) (or those of McAllester (1998; 1999); Maurer (2004); Catoni (2003); Seeger (2002); Tolstikhin & Seldin (2013)). The results obtained that way, however, do not account for the interactive and distributed structure of the problem. In contrast, note that, for example, the first bound of Theorem 1 involves, for each device $k$ and round $r$, the KL-divergence term $\mathbb{E}_{\overline{W}^{(r-1)}} \big[ D_{KL} \big( P_{W_k^{(r)}|S_k^{(r)}, \overline{W}^{(r-1)}} \,\|\, P_{k,r} \big) \big]$, which can be seen as accounting for (a bound on) the contribution of the model of that client at that round $r$ to the overall generalization error.
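To make the structure of the bound concrete, the following small sketch computes the quantity that drives it in the common case of Gaussian local posteriors and Gaussian priors: a closed-form KL divergence per (client, round), averaged over all $K \times R$ pairs. It only illustrates how per-client, per-round contributions enter the bound (e.g., when the KL terms are used as regularizers during training); the constants of Theorem 1 are omitted, and the Gaussian modeling choice is an assumption made here, not part of the theorem.

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), closed form."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def averaged_kl(posteriors, priors):
    """Average of the per-(client, round) KL terms appearing in Theorem 1.

    posteriors[(k, r)] and priors[(k, r)] are (mean, std) pairs for client k at
    round r; in the theorem the prior P_{k,r} may depend on the previous
    aggregated model, which is what makes per-round priors natural.
    """
    kls = [gaussian_kl(*posteriors[key], *priors[key]) for key in posteriors]
    return float(np.mean(kls))  # the (1/KR) * sum_{k,r} KL term

# Toy usage with K = 2 clients and R = 3 rounds in a 5-dimensional model space.
rng, d = np.random.default_rng(0), 5
post = {(k, r): (rng.normal(size=d), 0.1 * np.ones(d)) for k in range(2) for r in range(3)}
prior = {key: (np.zeros(d), np.ones(d)) for key in post}
print(averaged_kl(post, prior))
```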
In a sense, the theorem says that only the average of those KL divergence terms matters for the bound, which could be interpreted as validating the intuition that some clients may be allowed to “overfit” during some or all of the rounds as long as the overall generalization error budget is controlled suitably by other devices. Also, as it can be seen from the result, the choice of priors is specifically tailored for the multi-round multi-client setting in the sense that the prior of client $k$ at round $r$ could depend on all past aggregated models $W^{(r-1)}$ and $P_k^{(r)}$ is allowed to depend on all local models and datasets till round $(r - 1)$. The result also has implications on the design of practical learning FL systems: for example, on one aspect it suggests that the aforementioned KL-divergence term can be used as a round-dependent regularizer which accounts better for the variation of the quality of the training datasets across devices and rounds. Finally, by viewing our studied learning setup with disjoint datasets along the clients and rounds as some form of distributed *semi-online* process, Theorem 1 may be seen as a suitable distributed version of the PAC-Bayes bound of Haddouche & Guedj (2022) established therein for a centralized online-learning setup. Note that a direct extension of that result to our distributed setup would imply considering the generalization error of client \( k \) at round \( r \) with respect to the intermediate aggregated model \( \bar{w}_k^{(r)} \), not the final \( \bar{w}_k^{(R)} \) as we do. In fact, in that case, the problem reduces to an easier virtual one-round setup that was also studied in Barnes et al. (2022a,b) for Bregman divergences losses and linear and local Gaussian models; but at the expense of analyzing alternate quantity in place of the true generalization error (6) that we study. We now present a more general, lossy, version of the bound of Theorem 1. The improvement is allowed by introducing a proper lossy compression that is defined formally below into the bound. This prevents the new bound from taking large values for deterministic algorithms with continuous hypothesis spaces. **Lossy compression.** Consider a quantization set \( \hat{\mathcal{W}} \subseteq \mathcal{W} \) and let \( \{\hat{W}_k^{(r)}\}_{k \in [K], r \in [R]} \subseteq \hat{\mathcal{W}} \), be a set of lossy versions of \( \{W_k^{(r)}\}_{k \in [K], r \in [R]} \), defined via some conditional Markov kernels \( p_{\hat{W}_k^{(r)}|S_k^{(r)}, W^{(r-1)}} \), i.e., we consider lossy compression of \( W_k^{(r)} \) using only \( S_k^{(r)} \) and the round-\((r-1)\) aggregated model \( W^{(r-1)} \). For a given distortion level \( \epsilon \in \mathbb{R}^+ \), \( \{p_{\hat{W}_k^{(r)}|S_k^{(r)}, W^{(r-1)}}\}_{k \in [K], r \in [R]} \) is said to satisfy the \( \epsilon \)-distortion criterion if following holds: for every \( s \in Z^n^K \), \[ \left| \mathbb{E}_{P_W|s} \left[ \text{gen}(s, W^{(R)}) \right] - \frac{1}{KR} \sum_{k \in [K], r \in [R]} \mathbb{E}_{(P_p)_{k,r}} \left[ \text{gen}(s_k^{(r)}, \hat{W}_k^{(R)}) \right] \right| \leq \epsilon/2, \] where \[ (P_p)_{k,r} = P_{W^{(r-1)}, W_k^{(r)|k}, s_k^{(r-1)}, s_k^{(r)}, \hat{W}_k^{(r)|s_k^{(r)}, W^{(r-1)}}} P_{\hat{W}_k^{(R)|W_k^{(r)}, \hat{W}_k^{(r)}, s_k^{(r+1,R)}}}. \] This condition can be simplified for Lipschitz losses, i.e., when \( \forall w, w' \in \mathcal{W}, |\ell(z,w) - \ell(z,w')| \leq \Omega \rho(w,w') \), for some distortion function \( \rho : \mathcal{W} \times \mathcal{W} \to \mathbb{R}^+ \). 
Then, a sufficient condition for (8) is \[ \sum_{k \in [K], r \in [R]} \mathbb{E}_{(P_p)_{k,r}} \left[ \rho(W^{(R)}, \hat{W}_k^{(R)}) \right] \leq KR\epsilon/(4\Omega). \] **Theorem 2.** Suppose that \( \ell(z,w) \in [0,C] \subset \mathbb{R}^+ \). Let for every \( k \in [K] \) and \( r \in [R] \), \( P_{k,r} \) be a conditional prior on \( \hat{W}_k^{(r)} \) given \( W^{(r-1)} \). Fix any distortion level \( \epsilon \in \mathbb{R}^+ \). Consider any \( (P_W|S, K, R, n) \)-FL model and any choices of \( \{p_{\hat{W}_k^{(r)}|S_k^{(r)}, W^{(r-1)}}\}_{k \in [K], r \in [R]} \) that satisfy the \( \epsilon \)-distortion criterion. Then, with probability at least \( (1-\delta) \) over \( S \sim P_S \), we have that \( \mathbb{E}_{W \sim P_W|S} \left[ \text{gen}(S, W^{(R)}) \right] \) is upper bounded by \[ \sqrt{\frac{1}{KR} \sum_{k \in [K], r \in [R]} \mathbb{E}_{W^{(r-1)} \sim P_{W^{(r-1)}|S_k^{(r-1)}}} \left[ D_{KL}\left(p_{\hat{W}_k^{(r)}|S_k^{(r)}, W^{(r-1)}}||P_{k,r}\right) \right] + \log(\frac{\sqrt{2n}}{\sqrt{R}\delta}) + \epsilon}. \] A trivial extension of the lossy PAC-Bayes bounds for centralized algorithms could be done by considering the quantization of the final aggregated model \( W^{(R)} \). Here, instead, for every \( k \in [K] \) and round \( r \in [R] \), we quantize the local model \( W_k^{(r)} \) separately while keeping the other devices’ local models at that round, i.e., the vector \( W_k^{(r)} \) fixed. This allows us to study the amount of the “propagated” distortion till round \( R \). Note that by quantizing \( W_k^{(r)} \) for a distortion constraint on the generalization error that is at most \( \nu \), the immediate aggregated model \( \bar{W}_k^{(r)} \) is guaranteed to have a generalization error within a distortion level of at most \( \nu/K \) from the true value. Also, interestingly, the distortion criterion (8) allows to “allocate” suitably the targeted total distortion \( KR\epsilon/2 \) into smaller constituent levels among all clients and across all rounds. **Proof outline:** Theorem 2 is proved in Appendix E.2 by breaking down the overall FL algorithm into \( KR \) “centralized”-like algorithms, using the following high-level steps: i. For every pair \((k, r)\), we define a virtual FL algorithm that is equivalent to the original one except for round \( r \) of client \( k \) for which the “local quantized algorithm” \( p_{\hat{W}_k^{(r)}|S_k^{(r)}, W^{(r-1)}} \) is considered instead of \( P_{W_k^{(r)}|S_k^{(r)}, W^{(r-1)}} \). This overall algorithm is denoted by \((P_p)_{k,r}\) as given by (9). Also, we define the overall algorithm with respect to the prior \( P_{k,r} \). These choices allow us to define \( 2KR \) ‘virtual’ algorithms, each of them differing in one distinct (client, round)-local algorithm from the original. ii. Next, by Lemma 3, we relate the probability that the generalization error exceeds a given threshold to the supremum of the expected average generalization error of all \( KR \)-virtual algorithms \((PP_{k,r})_{k,r}\). The supremum is taken with respect to all joint distributions that are in the vicinity of \( \mu^{\otimes n^K} \). iii. Finally, we apply a change of measure argument by two successive applications of Donsker-Varadhan’s inequality, from \((PP_{k,r})_{k,r}\) to \((PP_{k,r})_{k,r}\) and from \(v_S\) to \(\mu^{\otimes nK}\). Using the special form of \((PP_{k,r})_{k,r}\), the KL-divergences \(D_{KL}((PP_{k,r})_{k,r} || (PP_{k,r})_{k,r})\) lead to the desired KL-divergence terms. 
This allows in the last step of the proof to bound the cumulant generating function as needed. ### 3.2 Rate-Distortion Theoretic Bounds Define for \(k \in [K], r \in [R]\) and \(\epsilon \in \mathbb{R}\), the rate-distortion function \[ \mathcal{RD}(P_S, W, k, r, \epsilon) = \inf_{P_{W_k^{(r)}|S_k^{(r)}, W^{(r-1)}}} I(S_k^{(r)}; W_k^{(r)}|W^{(r-1)}), \] where the mutual information is evaluated with respect to \(P_{S_k^{(r)}} P_{W^{(r-1)}} P_{W_k^{(r)}|S_k^{(r)}, W^{(r-1)}}\) and the infimum is over all conditional Markov kernels \(P_{W_k^{(r)}|S_k^{(r)}, W^{(r-1)}}\) that satisfy \[ E_{S, W, W^{(R)}} \left[ \text{gen}(S_k^{(r)}, W^{(R)}) - \text{gen}(S_k^{(r)}, W^{(R)}) \right] \leq \epsilon, \] where the joint distribution of \(S, W, W^{(R)}\) factorizes as \(P_S, W P_{W_k^{(r)}|S_k^{(r)}, W^{(r-1)}} P_{W^{(R)}|W_k^{(r)}|k, W^{(r)}}\). **Theorem 3.** For any \((P_W|S, K, R, n)\)-FL model with distributed dataset \(S \sim P_S\), if the loss \(\ell(Z_k, w)\) is \(\sigma\)-subgaussian for every \(w \in W\) and any \(k \in [K]\), then for every \(\epsilon \in \mathbb{R}\) it holds that \[ E_{S, W \sim P_S, W} \left[ \text{gen}(S, W^{(R)}) \right] \leq \sqrt{2\sigma^2 \sum_{k \in [K], r \in [R]} \mathcal{RD}(P_S, W, k, r, \epsilon_k, r)/(nK)} + \epsilon, \] for any set of parameters \(\{\epsilon_k, r\}_{k \in [K], r \in [R]} \subset \mathbb{R}\) which satisfy \(\frac{1}{KR} \sum_{k \in [K]} \sum_{r \in [R]} \epsilon_k, r \leq \epsilon\). Similar to the PAC-Bayes type bounds of Theorem 1 and Theorem 2, the bound of Theorem 3 also shows the “contribution” of each client’s local model during each round to (a bound on) the generalization error as measured by (6). We refer the reader to Appendix F.3 for its proof, due to lack of space and the considerable needed technicality details. As it will become clearer from the below, an extended version of this theorem, stated in Appendix D.2, is particularly useful to study the Federated Support Vector Machines (FSVM) of the next section. For instance, an application of that result will yield a (more) explicit bound on the generalization error of FSVM in terms of the parameters \(K, R,\) and \(n\). Finally, a tail rate-distortion theoretic bound is derived in supplements. This result states loosely that having a good generalization performance with high probability requires the algorithm to be compressible not only under the true distribution \(P_S, W\), but also for all those distributions \(v_S, W\) that are in the vicinity of \(P_S, W\). ### 4 Federated Support Vector Machines (FSVM) In this section, we study the generalization behavior of Support Vector Machines (SVM) [Cortes & Vapnik, 1995; Vapnik, 2006] when optimized in the FL setup using SGD. SVM is a popular model, mainly used for binary classification, and is particularly powerful when used with high-dimensional kernels. For ease of exposition, however, we only developed results for linear SVMs that can be extended easily to any kernels. Consider a binary classification problem in which \(Z = X \times Y, X \subseteq \mathbb{R}^d\) with \(Y = \{-1, +1\}\). Using SVM for this problem consists of finding a suitable hyperplane, represented by \(w \in \mathbb{R}^d\), that properly separates the data according to their labels. For convenience, we only consider the case with zero bias. In this case, the label of an input \(x \in \mathbb{R}^d\) is estimated using the sign of \(\langle x, w \rangle\). 
The 0-1 loss $\ell_0 : \mathcal{Z} \times \mathcal{W} \rightarrow \mathbb{R}^+$ is then evaluated as $\ell_0(z, w) := \mathbb{1}_{y \langle x, w \rangle \leq 0}$. For the specific cases of the centralized ($K = R = 1$) and "one-shot" FL ($K \geq 2, R = 1$) settings (Grønlund et al., 2020; Sefidgaran et al., 2022a), it was observed that if the algorithm finds a hyperplane that separates the data with some margin, then it generalizes well. Motivated by this, we consider the 0-1 margin loss function $\ell_\theta : \mathcal{Z} \times \mathcal{W} \rightarrow \mathbb{R}^+$ for $\theta \in \mathbb{R}^+$, defined as $\ell_\theta(z, w) := \mathbb{1}_{y \langle x, w \rangle \leq \theta}$. Similar to previous studies, we consider the empirical risk with respect to the margin loss, i.e., $\hat{\mathcal{L}}_\theta(s, \bar{w}^{(R)}) = \frac{1}{nK} \sum_{k \in [K]} \sum_{i \in [n]} \ell_\theta(z_{k,i}, \bar{w}^{(R)})$, which is equal to $\frac{1}{KR} \sum_{k \in [K], r \in [R]} \hat{\mathcal{L}}_\theta(s_k^{(r)}, \bar{w}^{(R)})$. The population risk is considered with respect to the 0-1 loss function $\ell_0$, i.e., $\mathcal{L}(\bar{w}^{(R)}) = \frac{1}{K} \sum_{k \in [K]} \mathbb{E}_{Z_k \sim \mu_k} \big[\ell_0(Z_k, \bar{w}^{(R)})\big]$. The margin generalization error is then defined as $\text{gen}_\theta(s, \bar{w}^{(R)}) = \mathcal{L}(\bar{w}^{(R)}) - \hat{\mathcal{L}}_\theta(s, \bar{w}^{(R)})$. For the statement of the result that will follow, we make three assumptions on SGD. Due to lack of space, these assumptions are described and discussed in detail in Appendix C, and here we only state them informally. Consider a $(K, R, n, e, b)$-FL-SGD model. Let $\mathcal{W} = B_d(1)$, where $B_d(\nu)$ denotes the $d$-dimensional ball centered at the origin with radius $\nu > 0$.
**Theorem 4.** For FSVM optimized using \((K,R,n,e,b)\)-FL-SGD with \( W = B_d(1) \), \( X = B_d(B) \) and \( \theta \in \mathbb{R}^+ \), if Assumptions 1, 2, and 3 hold for some constants \( q_{e,b} \) and \( \alpha \), then, \[ E_{S,W \sim P_S,W} \left[ \text{gen}_g(S, W^{(R)}) \right] = O \left( \sqrt{\frac{B^2 \log(nK\sqrt{K}) \sum_{r \in [R]} L_r}{nK^2 \theta^2}} \right), \] where \[ L_r = \inf_{t \geq q^{(R-r)}} \left\{ t \log \max \left( \frac{K \theta}{Bt}, 2 \right) \right\} \leq q_{e,b}^{(R-r)} \log \max \left( \frac{K \theta}{B_{e,b}^{(R-r)}}, 2 \right). \] To the best of our knowledge, this result is the first of its kind for FSVM. We pause to state a few remarks that are in order, before discussing some key elements of its proof technique. First, the bound of Theorem 4 shows an explicit dependence on the number of communication rounds \( R \), in addition to the number of participating devices \( K \) and the size of individual datasets \( n \). In particular, the bound increases with \( R \) for fixed \((n,K)\). This suggests that the generalization power of FSVM may diminish by more frequent communication with the PS, as illustrated also numerically in Section 5. As a consequence, the population risk decreases less faster with \( R \) than the empirical risk. By taking into account the extra communication cost of larger \( R \), this means that during the training phase of such systems, one might choose deliberately to stop before convergence (at some appropriate round \( R^* \leq R \)), accounting for the fact that while interactions that are beyond \( R^* \) indeed generally contribute to diminishing the empirical risk further their net effect on the true measure of performance, which is the population risk, may be negligible. In the experiment section, we show such net effect when local models are ResNet-56 could be even negative. Second, for fixed \((n,R)\) the bound improves (i.e., gets smaller) with \( K \) with a factor of \( \sqrt{\log(K)/K} \). This behavior was previously observed in different contexts and under different assumptions in Sefidgaran et al. (2022a) and Barnes et al. (2022a), but in both works only for the “one-shot” FL setting, i.e., \( R = 1 \). In fact, it is easily seen that for the specific case \( R = 1 \), the bound recovers the result of Sefidgaran et al. (2022a) Theorem 5. **Proof outline:** Theorem 4 is proved in Appendix F.4 using an extended version of Theorem 3, i.e., Proposition 1 (in Appendix D.2), and by bounding the appearing rate-distortion terms therein. Intuitively, rate-distortion terms are equal to the number of bits needed to represent an (optimal) quantized model given certain quantization precision. To establish an appropriate upper bound that is independent of the dimension \( d \), we apply a quantization on a smaller \( d \)-independent dimension, using the Johnson-Lindenstrauss (JL) dimension-reduction transformation (Johnson & Lindenstrauss, 1984), which is inspired by Grönlund et al. (2020); Sefidgaran et al. (2022a); but with substantial extensions needed for the considered multiple-client multi-round setup. In particular, the main difficulty here is that while in the rate-distortion terms of Proposition 1 the mutual information term \( I_1 \) does not depend on \( W^{(R)} \), the distortion criterion \( I_2 \) does. Thus, one needs to study the propagation of the distortion, induced by quantizing a local model, until the last round. 
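The Johnson–Lindenstrauss step mentioned in the proof outline above can be pictured with a few lines of code: local models are multiplied by a shared random Gaussian matrix so that inner products (and hence margins) are approximately preserved in a dimension that does not depend on $d$. The sketch below, with an illustrative choice of the reduced dimension $m$, is only meant to convey that step; the actual quantization scheme and the round-dependent choice of $m_r$ are as given in the appendix.

```python
import numpy as np

def jl_project(vectors, m, seed=0):
    """Johnson-Lindenstrauss random projection of d-dimensional vectors to R^m.

    With the 1/sqrt(m) scaling, inner products and norms are preserved up to a
    small distortion with high probability, which is what allows the margin
    analysis to go through in a dimension independent of d.
    """
    d = vectors.shape[1]
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((d, m)) / np.sqrt(m)  # shared projection matrix
    return vectors @ proj

# Toy check: pairwise inner products before and after projecting 50 models
# from d = 10_000 down to m = 500 (an illustrative, not theorem-driven, choice).
w = np.random.default_rng(1).standard_normal((50, 10_000)) / np.sqrt(10_000)
w_low = jl_project(w, m=500)
print(np.abs(w @ w.T - w_low @ w_low.T).max())  # small distortion
```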
The distortion propagation differs depending on the round \( r \) where the local model is located. Hence, for a fixed propagated distortion, the amount of dimension reduction should depend on \( r \). More precisely, for each \( k \in [K] \) and \( r \in [R] \) we first map the local models \( W_k^{(r)} \) to a space with a smaller dimension of order \( m_r = O \left( \frac{B^2 \log(nK\sqrt{K}) \log(K/t)}{K^2 \theta^2 t^2} \right) \), for some \( t \geq q^{(R-r)} \), using JL transformation. As can be observed, we allow possibly different values of the dimensions for different values of \( r \). For a contractive SGD, \( m_r \) decreases with \( r \). Then, we define the quantized model subtly using this locally mapped model, as defined in equation (63). This quantization together with a newly defined loss function... equation (65) allow to study the propagation of such quantization distortion in two steps: first, we show that the immediate aggregated model has $K$ times smaller distortion level than that of client $k$. Next, we study the evolution of distortion along SGD iterations till the end of the round $R$, using induction on the rounds (see “Bounding (73)” in the proof) and by exploiting the properties of the JL transformation (see “Bounding (74)” & “Bounding (75)” in the proof). Intuitively, for a contractive SGD, the distortion decreases. This is why our $m_r$ decreases with $r$. For a non-contractive SGD, the opposite holds. 5 EXPERIMENTS FSVM. We start by verifying the increasing behavior of the generalization error of FSVM with respect to $R$, as suggested by Theorem 4. To do so, we consider a binary classification problem, with two extracted classes (digits 1 and 6) of MNIST dataset. Details and further experiments can be found in Appendix E. Fig. 2 shows the (estimated) expected generalization error $\mathbb{E}_{S,W}[\text{gen}(S, W^{(R)})]$ and the computed bound of Theorem 4 versus the number of communication rounds $R$, for fixed $n = 100$ and for $K \in \{10, 20, 50\}$. The expectation is estimated over $M = 100$ Monte-Carlo simulations. As can be observed, for any value of $K$, the generalization error increases with $R$, as predicted by the bound of Theorem 4. We emphasize that for Fig. 2, the total number of training data points and SGD iterations are kept fixed regardless of the number of running communication rounds $R$ (for every value of $R$ the devices perform $\tau = en/R$ local SGD iterations per-round); and, hence, the increase in the generalization error cannot be attributed to the classical “overfitting” phenomenon. Appendix E.4 also reports similar behavior in the context of heterogeneous data. Additional experiments on generalization of FL. To numerically verify the validity of our findings beyond FSVM, we conducted additional experiments using ResNet-56 as local models, and with CIFAR-10 dataset. Fig. 3a shows the generalization error of the global model as a function of $R$ while Fig. 3b shows the corresponding empirical and population risks. We provide average values over 5 runs and the shaded areas correspond to the standard deviation values. Experimental details are given in Appendix E. One can observe in Fig. 3a that the generalization error is increasing with $R$, showing that the behavior suggested by Theorem 4 and observed in the above FSVM experiments, remains valid in a different setup. The empirical risk in Fig. 3b is decreasing with $R$ as expected. More importantly, the population risk in Fig. 
3b can be observed to have a global minimum for $R^* \approx 100$, while the maximum number of communication rounds is $R = 3600$, thus showing that one can minimize the “true” objective i.e., the population risk, while reducing communication from a large amount.\footnote{The question of estimating, prior to training, the optimal value $R^*$ of $R$ is an important one, with far-reaching consequences in practice, e.g., for the design of simultaneously communication-efficient and generalizable FL algorithms. This is left for future works.} REFERENCES Kwangjun Ahn, Chulhee Yun, and Suvrit Sra. Sgd with shuffling: optimal rates without component convexity and large epoch requirements. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 17526–17535. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-Paper.pdf L. P. Barnes, A. Dytso, and H. V. Poor. Improved information theoretic generalization bounds for distributed and federated learning. In 2022 IEEE International Symposium on Information Theory (ISIT), pp. 1465–1470, 2022a. Leighton Pate Barnes, Alex Dytso, and Harold Vincent Poor. Improved information-theoretic generalization bounds for distributed, federated, and iterative learning. Entropy, 24(9):1178, 2022b. Olivier Catoni. A pac-bayesian approach to adaptive classification. preprint, 840, 2003. Romain Chor, Milad Sefidgaran, and Abdellatif Zaidi. More communication does not result in smaller generalization error in federated learning. In 2023 IEEE International Symposium on Information Theory (ISIT), pp. 48–53, 2023. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995. Aymeric Dieuleveut, Alain Durmus, and Francis Bach. Bridging the gap between constant step size stochastic gradient descent and Markov chains, 2018. Allan Grønlund, Lior Kamma, and Kasper Green Larsen. Near-tight margin-based generalization bounds for support vector machines. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 3779–3788. PMLR, 13–18 Jul 2020. Xinran Gu, Kaifeng Lyu, Longbo Huang, and Sanjeev Arora. Why (and when) does local SGD generalize better than SGID? In The Eleventh International Conference on Learning Representations, 2023. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local sgd with periodic averaging: Tighter analysis and adaptive synchronization. Advances in Neural Information Processing Systems, 32, 2019. Maxime Haddouche and Benjamin Guedj. Online pac-bayes learning. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 25725–25738. Curran Associates, Inc., 2022. William B Johnson and Joram Lindenstrauss. Extensions of lipschitz mappings into a hilbert space 26. Contemporary mathematics, 26:28, 1984. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D’Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. 
Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ö兹gür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning, 2019. URL https://arxiv.org/abs/1912.04977 Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2):1–210, 2021. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132–5143. PMLR, 2020.
O1lR4vSw5x
Do you have any insights into the learned vector fields? The presented experiments demonstrate that the proposed approach performs quite well for state estimation, but have you compared the learned vector fields to the ground-truth ones?
Recursive Neural Ordinary Differential Equations for Partially Observed Systems Anonymous authors Paper under double-blind review Abstract Identifying spatiotemporal dynamics is a difficult task, especially in scenarios where latent states are partially observed and/or represent physical quantities. In this context, first-principle ordinary differential equation (ODE) systems are often designed to describe the system’s dynamics. In this work, we address the problem of learning parts of the spatiotemporal dynamics with neural networks when only partial information about the system’s state is available. Taking inspiration from recursive state estimation and Neural ODEs, we outline a general framework in which complex dynamics generated by differential equations with distinguishable states can be learned in a principled way. We demonstrate the performance of the proposed approach leveraging both numerical simulations and a real dataset extracted from an electro-mechanical positioning system. We show how the underlying equations fit into our formalism and demonstrate the improved performance of the proposed method when compared with standard baselines. 1 Introduction Ordinary differential equations (ODEs) are used to describe the state evolution of many complex physical systems in engineering, biology, and other fields of natural sciences. Traditionally, first-principle notions are leveraged in designing ODEs as a form to impose physical meaning and interpretability (Psichogios & Ungar [1992]) of latent states. A major issue, however, is the inherent complexity of real-world problems for which even carefully designed ODE systems cannot account for all aspects of the true underlying physical phenomenon (Karniadakis et al. [2021]). Moreover, we often require prediction of systems whose dynamics are not fully understood or are partially unknown (Imbiriba et al. [2022]). In this context, Neural ODEs (NODEs) (Chen et al. [2018]) emerged as a powerful tool for learning complex correlations directly from the data, where residual neural networks (NNs) are used to parameterize the hidden ODEs’ states. Extensions of NODE were developed to improve learning speed (Xia et al. [2021], Massaroli et al. [2021]) and learning longtime dependencies in irregularly sampled time series (Xia et al. [2021]). A major challenge in learning NODEs arises when latent states of interest contribute indirectly to the observations. This is the case when an unobserved state (in the sense that it is not measured) influences an observed state. In this scenario, NODE’s standard solutions, which are optimized using the adjoint method (Boltyanskiy et al. [1962]), are compromised. Furthermore, NODE systems may have infinitely many solutions since parameters and unobserved states are estimated jointly. As a consequence, even when the model is capable of fitting the data, unobserved states cannot be accurately inferred without incorporating some kind of prior information in the model (Demirkaya et al. [2021]). Recently, new hybrid strategies have focused on mixing first-principle models and NODEs to constrain the solution space and obtain meaningful estimations of missing states (Imbiriba et al. [2022], Demirkaya et al. [2021], Ghanem et al. [2021]). Despite the lack of a clear formalization, in these works the authors were imposing some kind of distinguishability among states by adding known parts of the dynamics, resulting in hybrid first-principle data-driven models. 
Nevertheless, these works focus on state estimation using data-driven components to improve or augment existing dynamics but fail to learn global models and do not scale for large parameterized models. In this paper, we propose a sequential optimization approach that at each time step solves an alternating optimization problem for learning system dynamics under partially observed states, when states... are distinguishable. The approach focuses on learning unknown dynamics from data where the state related to the unknown dynamics is unobserved. Since the dynamics is unknown, we assume it is described by parametric models such as NNs. The proposed solution leverages the relationship between many recursive state-space estimation procedures and Newton’s method (Humpherys et al., 2012) to develop an efficient recursive NODE learning approach capable of sequentially learning states and model parameters. The benefit of the sequential strategy is twofold: (1) reduce the need for accurate initial conditions during training; (2) avoids simultaneous estimation of all states, making second-order optimization methods feasible. Furthermore, the proposed approach exploits the distinguishable property of states by designing an alternating optimization strategy with respect to states and parameters. The result is an interconnected sequential optimization procedure, where at each step model parameters and data are used to estimate latent states, and corrected latent states are used to update the model parameters in the current optimization step. Such alternating optimization approach improves the optimization of system parameters since it estimates unobserved hidden states and uses them in learning system parameters. In the case of RNODE, it also prevents vanishing gradients. Moreover, we define distinguishable latent variables and test the proposed Recursive NODE (RNODE) in hybrid scenarios where NNs replace parts of the ODE systems such that the distinguishability of latent variables is kept. Finally, as a side effect of the recursive paradigm adopted the proposed strategy can assimilate data and estimate initial conditions by leveraging its sequential state estimation framework over past data. 2 RELATED WORK 2.1 PARTIAL OBSERVATION In the context of data-driven ODE designs, most learning frameworks assume that all states are observed in the sense that they are directly measured. This assumption does not reflect many real-world scenarios where a subset of the states are unobserved. GP-SSM is a well-established approach used for dynamic systems identification (McHutchon et al., 2015; Falongo et al., 2019). GP-SSM can be adapted by introducing a recognition model that maps outputs to latent states to solve the problem of partial measurements (Eleftheriadis et al., 2017). Nevertheless, these methods do not scale well with large datasets and are limited to small trajectories (Doerr et al., 2018). Indeed, (Doerr et al., 2018) minimizes this problem by using stochastic gradient ELBO optimization on minibatches. However, GP-SSM-based methods avoid learning the vector field describing the latent states and instead directly learn a mapping from a history of past inputs and observations to the next observation. 
Similar approaches to the recognition models have been used for Bayesian extensions of NODEs, where the NODE describes the dynamics of the latent state while the distribution of the initial latent variable given the observations and vice versa are approximated by encoder and decoder networks (Yildiz et al., 2019; Norcliffe et al., 2021). The encoder network, which links observations to latent state by a deterministic mapping or by approximating the conditional distribution, can also be a Recurrent Neural Network (RNN) (Rubanova et al., 2019; Kim et al., 2021; De Brouwer et al., 2019), or an autoencoder (Bakarji et al., 2023). Despite focusing on mapping observations to latent states with neural networks and autoencoders, these works were not demonstrated to learn parameterized models under partial observations. Moreover, this parameterized line of work of mapping observation to latent states suffers from undistinguishability problem since several latent inputs could lead to the same observation. Recently, sparse approaches such as (Bakarji et al., 2022) merged encoder networks to identify a parsimonious transformation of the hidden dynamics of partially observed latent states. Moreover, Nonlinear Observers and recognition models were combined with NODEs to learn dynamic model parameters from partial observations while enforcing physical knowledge in the latent space (Buisson-Fenet et al., 2022). Differently from the aforementioned methods, in this work, we propose a recursive alternating approach that uses alternating Newton updates to optimize a quadratic cost function with respect to states and model parameters. Furthermore, the proposed strategy provides a systematic way to estimate initial conditions from historical data. 2.2 SECOND ORDER NEWTON METHOD Despite the efficiency and popularity of many stochastic gradient descent methods (Robbins & Monro, 1951; Duchi et al., 2011; Hinton et al., 2012; Kingma & Ba, 2014) for optimizing NNs, great efforts have been devoted to exploiting second-order Newton methods where Hessian information is used, providing faster convergence (Martens & Grosse, 2015; Botev et al., 2017; Gower et al., 2016; Mokhtari & Ribeiro, 2014). When training neural networks, computing the inverse of the Hessian matrix can be extremely expensive (Goldfarb et al., 2020) or even intractable. To mitigate this issue, Quasi-Newton methods have been proposed to approximate the Hessian pre-conditioner matrix such as Shampoo algorithm (Gupta et al., 2018), which was extended in (Anil et al., 2020) to simplify blocks of the Hessian, and in (Gupta et al., 2018) to be used in variational inference second-order approaches (Peirson et al., 2022). Similarly, works in (Goldfarb et al., 2020; Byrd et al., 2016) focused on developing stochastic quasi-Newton algorithms for problems with large amounts of data. It was shown that recursive the extended Kalman filter can be viewed as Gauss-Newton method (Bell, 1994; Bertsekas, 1996). Moreover, Newton’s method was used to derive recursive estimators for prediction and smoothing (Humpherys et al., 2012). In this paper, we develop a recursive Newton method that mitigates the problem of partial observations of latent states. 3 MODEL AND BACKGROUND In this section, we describe our modeling assumptions, discuss the distinguishability of latent states, and present the time evolution of the resulting generative model. 
3.1 MODEL In this work, we focus on stochastic differential equations (SDE) as defined in (Øksendal & Øksendal, 2003) to describe the evolution of system parameters \( \theta(t) \in \mathcal{P} \subset \mathbb{R}^{d_\theta} \), latent states \( x(t) \in \mathcal{X} \subset \mathbb{R}^{d_x} \), and observations (or measurements) \( y(t) \in \mathcal{Y} \subset \mathbb{R}^{d_y} \). The joint process can be described as: \[ \begin{align*} \dot{\theta}(t) &= g(\theta(t)) + \nu(t) \\ \dot{x}(t) &= f(x(t), \theta(t), u(t)) + \epsilon(t) \\ y(t) &= h(x(t)) + \zeta(t) \end{align*} \] where \( \nu(t), \epsilon(t) \) and \( \zeta(t) \) are Wiener processes. \( u(t) \in \mathcal{U} \subset \mathbb{R}^{d_u} \) is a vector of external inputs, and the functions \( g : \mathcal{P} \rightarrow \mathcal{P}, f : \mathcal{X} \times \mathcal{P} \times \mathcal{U}, \) and \( h : \mathcal{X} \rightarrow \mathcal{Y} \) describe the system parameters, latent and observation processes, respectively. To describe the evolution of system parameters \( \theta(t) \) and latent states \( x(t) \) we consider the process in equation 1 to be first-order Markov process evolving over time \( t \). The partial observation problem: Ideally, states \( x(t) \) would be directly observed, and thus appear as an element in \( y(t) \). In practice, some of these states could influence \( y(t) \) only indirectly by acting on other measurable states. That is when classical training fails. In this work, we are interested in learning the unknown dynamics governing unobserved states. Note that this scenario poses further challenges over the estimation process since the recovery of latent states can be compromised. 3.2 DISTINGUISHABILITY OF NONLINEAR SYSTEMS The task of recovering latent states \( x(t) \) from a sequence of observations and inputs \( \mathcal{D}_N \triangleq \{u(0), y(0), \ldots, u(N - 1), y(N - 1)\} \) rests on our ability to distinguish two observations \( h(x(t_a)) \) and \( h(x(t_b)) \) from one another. **Definition 3.1** We say that a pair of latent variables \( x(t_a) \) and \( x(t_b) \) are distinguishable with respect to a control sequence \( u(t) \in \mathcal{U} \subset \mathbb{R}^{d_u} \) if \[ h(x(t_a)) \neq h(x(t_b)) \quad \forall x(t_a) \neq x(t_b) \] Otherwise, we say that the pair is indistinguishable with respect to \( u(t) \). If under a control input \( u(t), h(x(t_a)) = h(x(t_b)) \), then the state estimator cannot identify the true state \( x \) since it can assume the true state to be \( x(t_a) \) when it’s \( x(t_b) \) and vice versa. Since our procedure relies on finding latent states \( x(t) \) given a control input \( u(t) \) and observation \( y(t) \) and uses it to identify the ODE system, by estimating the model parameters \( \theta(t) \), estimating the wrong state \( x(t) \) will result in finding the wrong model parameters, hence training will fail. A way to impose state distinguishability is to incorporate prior knowledge regarding the relationship of states focusing on achieving the properties stated in Definition 3.1. 3.3 Generative model In the continuous model presented in (1), a continuous-time description for the latent processes is assumed even though the observations are recorded at discrete time points. 
The time evolution of the states \( x(t) \) can therefore be expressed as time integration of (1) using an off-the-shelf ODE solver: \[ x(t_i) = x(t_{i-1}) + \int_{t_{i-1}}^{t_i} f(x(t), u(t), \theta(t)) \, dt + \int_{t_{i-1}}^{t_i} \frac{\partial e(t)}{\partial t} \, dt \\ = \text{ODESolve}(f, x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}), t_{i-1}, t_i) + e(t) \] we define \[ f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1})) = \text{ODESolve}(f, x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}), t_{i-1}, t_i) + e(t) \] and \[ g_o(\theta(t_{i-1})) = \text{ODESolve}(g, \theta(t_{i-1}), t_{i-1}, t_i), \theta(t_{i-1}) + \nu(t). \] Based on the continuous model presented in (1) we present the time evolution of the latent states by the following generative model: \[ \theta(t_i) = g_o(\theta(t_{i-1})) + \nu(t) \\ x(t_i) = f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1})) + e(t) \\ y(t_i) = h(x(t_i)) + \zeta(t). \] ![Figure 1: The generative model (left panel), and one step of RNODE (right panel).](image) 4 Method Recursive Neural Ordinary Differential Equations (RNODE) finds the model parameters \( \theta(t) \) and latent states \( x(t) \) given a dataset \( D \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\} \) of discrete observations and control inputs when \( x(t) \) is partially observed. Inspired by previous work describing the link between second-order Newton’s method and the Kalman filter (Humpherys et al., 2012), the cost function \( L \) is updated and solved sequentially to find latent states \( x(t) \) and model parameters \( \theta(t) \) in one unified framework. RNODE assumes model distinguishability which implies that latent states \( x(t) \) are recoverable from observations \( y(t) \). In this context, we break the optimization steps into two concerning optimization with respect to \( x(t) \) and \( \theta(t) \). 4.1 Sequential Newton Derivation We denote by \( \Theta_N = [\theta(t_0), \ldots, \theta(t_N)] \) and \( X_N = [x(t_0), \ldots, x(t_N)] \) to be the set of latent states sampled at \( t_0, t_1, \ldots, t_N \). To train the model, we optimize \( (\Theta_N, X_N) \) to minimize a quadratic cost function starting from initial \( \{x(t_0), \theta(t_0)\} \) using a collection of combined observation and input sequences \( D \) where the cost function is defined as: \[ L_N(\Theta_N, X_N) = \frac{1}{2} \sum_{i=1}^{N} \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|_{Q_x^{-1}}^2 \\ + \|y(t_i) - h(x(t_i))\|_{R_y^{-1}}^2 + \|\theta(t_i) - g_o(\theta(t_{i-1}))\|_{Q_\theta^{-1}}^2. \] where $Q_x$, $R_y$ and $Q_\theta$ are known positive definite matrices, and $\|a - b\|_A^{-1} = (a - b)^T A^{-1}(a - b)$. As the Hessian’s inverse is in general intractable, finding optimal solution $(\Theta_N^*, X_N^*)$ using the second order Newton method over the whole data set of size $N$ is unfeasible. For this reason, we resort to a sequential strategy by introducing a modified quadratic function $L_i(\Theta_i, X_i)$. Let us re-write the cost function at time $t_i$ as: $$L_i(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}}$$ $$+ \frac{1}{2} \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}} + \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}}$$ (8) where $L_{i-1}(\Theta_{i-1}, X_{i-1})$ and $L_i(\Theta_i, X_i)$ are the cost functions at times $t_{i-1}$ and $t_i$, respectively; $\Theta_i = [\theta(t_0), \ldots, \theta(t_i)]$ and $X_i = [x(t_0), \ldots, x(t_i)]$. 
In the sequential optimization paradigm, $\Theta_{i-1}$ and $X_{i-1}$ are assumed known and at the $i$-th optimization step is performed only with respect to $\{\theta(t_i), x(t_i)\}$. When $\{\theta(t_i), x(t_i)\}$ are determined jointly such as in (Humpherys et al., 2012), the optimization process will suffer from vanishing gradients under partial observations. However, if $x(t_i)$ is distinguishable, we can circumvent the vanishing gradient problem by first optimizing with respect to $x(t_i)$ and then $\theta(t_i)$. This will allow us to circumvent the partial observability problem and enable the use of an estimate of the unobserved state in training. To do so, we break the optimization function (8) into four alternating optimization procedures aiming at finding $\hat{x}(t_i)$ and then finding $\hat{\theta}(t_i)$ that minimizes (8) given $\hat{x}(t_i)$. Let us begin by defining two intermediate optimization functions $L_{i|i-1}^x$ and $L_{i|i-1}^\theta$ in (9) and (10) respectively as follows: $$L_{i|i-1}^x(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}}$$ $$+ \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}}$$ (9) and $$L_{i|i-1}^\theta(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}}.$$ (10) We proceed by optimizing (9) for $x(t_i)$ and (10) for $\theta(t_i)$, yielding the respective solutions below: $$\hat{\theta}(t_i|t_{i-1}) = g_o(\hat{\theta}(t_{i-1}))$$ $$\hat{x}(t_i|t_{i-1}) = f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1})).$$ (11) Next, we define the two optimization functions responsible for the update steps for states and parameters. Specifically, we define $L_i^x$ as: $$L_i^x(\Theta_i, X_i) = L_{i|i-1}^x(\Theta_i, X_i) + \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}}$$ (12) to be optimized with respect to $x(t_i)$ by minimizing $L_i^x$ given intermediate values of equation (11) where: $$\hat{x}(t_i) = \hat{x}(t_i|t_{i-1}) - \left[(\nabla^2 L_i^x(\Theta_i, X_i))^{-1}\right]_{i,:} \nabla L_i^x(\Theta_i, X_i)$$ (13) The solution to the problem above is given by given by (16). Equivalently, we define the update optimization function $L_i^\theta$ as: $$L_i^\theta(\Theta_i, X_i) = L_{i|i-1}^\theta(\Theta_i, X_{i-1}) + \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}}$$ $$+ \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}}$$ (14) to be optimized with respect to $\theta(t_i)$ by minimizing $L_i^\theta$ given intermediate values of equation (11) and (16) as follows: $$\hat{\theta}(t_i) = \hat{\theta}(t_i|t_{i-1}) - \left[(\nabla^2 L_i^\theta(\Theta_i, X_{i-1}))^{-1}\right]_{i,:} \nabla L_i^\theta(\Theta_i, X_{i-1})$$ (15) The resulting optimal variable $\hat{\theta}(t_i)$ is given by (17). The procedure is repeated until $t_i = t_N$. 
We present our main result in the following theorem: Theorem 4.1 Given \( \hat{\theta}(t_{i-1}) \in \hat{\Theta}_{i-1} \) and \( \hat{x}(t_{i-1}) \in \hat{X}_{i-1} \), and known \( P_{\theta_{i-1}} \in R^{d_\theta \times d_\theta} \) and \( P_{x_{i-1}} \in R^{d_x \times d_x} \), the recursive equations for computing \( \hat{x}(t_i) \) and \( \hat{\theta}(t_i) \) that minimize \( \mathcal{L}_t \) are given by the following: \[ \hat{x}(t_i) = f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1})) - P_{x_i} H_i^T (H_i P_{x_i} H_i^T + R_y)^{-1} \left[ h(f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1}))) - y(t_i) \right] \] (16) \[ \hat{\theta}(t_i) = g_o(\hat{\theta}(t_{i-1})) - G_{\theta_{i-1}} P_{\theta_i} F_{\theta_{i-1}}^T \left[ f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1})) - \hat{x}(t_i) \right] \] (17) with \( P_{\theta_i}, P_{x_i} \) being intermediate matrices and \( P_{\theta_i} \) and \( P_{x_i} \) being the lower right blocks of \( (\nabla^2 \mathcal{L}_t)^{-1} \) and \( (\nabla^2 \mathcal{L}_t^o)^{-1} \) respectively: \[ P_{\theta_i} = P_{\theta_{i-1}} - P_{\theta_{i-1}} F_{\theta_{i-1}}^T \left( Q_x + F_{\theta_{i-1}} P_{\theta_{i-1}} F_{\theta_{i-1}}^T \right) F_{\theta_{i-1}} P_{\theta_{i-1}} \] \[ P_{x_i} = F_{x_{i-1}} P_{x_{i-1}} F_{x_{i-1}} + Q_x \] \[ P_{x_i} = P_{x_i} [I + H_i (R_y - H_i P_{x_i} H_i^T) H_i P_{x_i}] \] \[ P_{\theta_i} = Q_{\theta_i} + G_{\theta_{i-1}} P_{\theta_i} G_{\theta_{i-1}} \] with \( H_i, F_{x_{i-1}}, G_{\theta_{i-1}}, \) and \( F_{\theta_{i-1}} \) being the jacobians of the vector fields \( h, f_o \) and \( g_o \) at \( \hat{x}(t_i|t_{i-1}), \hat{x}(t_{i-1}) \) and \( \hat{\theta}(t_{i-1}) \): \[ H_i = \frac{\partial h(\hat{x}(t_i|t_{i-1}))}{\partial \hat{x}(t_i|t_{i-1})}, \quad F_{x_{i-1}} = \frac{\partial f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1}))}{\partial \hat{x}(t_{i-1})}, \quad F_{\theta_{i-1}} = \frac{\partial f_o(\hat{x}(t_{i-1}), \hat{\theta}(t_{i-1}))}{\partial \hat{\theta}(t_{i-1})} \] and \( G_{\theta_{i-1}} = \frac{\partial g_o(\hat{\theta}(t_{i-1}))}{\partial \hat{\theta}(t_{i-1})} \). The proof of Theorem 4.1 is provided in Appendix A. As a consequence of Theorem 4.1, \( \hat{x}(t_i) \) is computed according to (16) using \( \hat{\theta}(t_{i-1}) \). \( \hat{\theta}(t_i) \) is computed afterwards according to (17) using \( \hat{x}(t_i) \) that was previously found in (16). This alternating procedure between \( x(t_i) \) and \( \theta(t_i) \) is explained in the right panel of Figure 1, which depicts the four alternate optimization steps performed for each iteration \( t_i \). The computational complexity of RNODE is detailed in Appendix D. An epoch of the RNODE has a complexity of \( O(N(d_\theta^3 + 2d_\theta^2 d_x + 2d_\theta d_x^2)) \). Under the assumption that \( d_\theta \gg d_x \) the complexity becomes \( O(N(2d_\theta^2 d_x + 2d_\theta d_x^2)) \). During testing, however, the complexity becomes \( O(d_\theta) \) per step if integrating the learned mean vector field. 4.2 Obtaining initial condition from historical data Obtaining initial conditions \( x(t_0) \) during test time is often challenging. However, the proposed recursive framework can easily provide an estimate of the initial condition if historical data \( \mathcal{D}_H \triangleq \{u(t_{-N}), y(t_{-N}), \ldots, u(t_0), y(t_0)\} \) is available as described in equation 58 in Appendix C. Thus, given the model \( \theta^* \) we can exploit the update equation for the states, see (17), to provide \( \hat{x}(t_0) \). 
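To make the recursion concrete, below is a minimal NumPy sketch of one alternating step, realizing the prediction step (11) followed by the state update (16) and the parameter update (17). The Jacobian callables, the weight matrices $Q_x, Q_\theta, R_y$, and the use of the standard extended-Kalman covariance recursion for $P_{x_i}$ are assumptions of this sketch rather than a verbatim transcription of the theorem.

```python
import numpy as np

def rnode_step(x_hat, th_hat, P_x, P_th, u_prev, y_i,
               f_o, g_o, h, jac_f_x, jac_f_th, jac_g_th, jac_h_x,
               Q_x, Q_th, R_y):
    """One alternating RNODE step (prediction, state update, parameter update)."""
    # Prediction step, equation (11)
    th_pred = g_o(th_hat)
    x_pred = f_o(x_hat, u_prev, th_hat)

    # Jacobians evaluated at the current estimates
    F_x = jac_f_x(x_hat, u_prev, th_hat)    # d f_o / d x
    F_th = jac_f_th(x_hat, u_prev, th_hat)  # d f_o / d theta
    G_th = jac_g_th(th_hat)                 # d g_o / d theta
    H = jac_h_x(x_pred)                     # d h / d x

    # State covariance prediction and measurement update, equation (16)
    P_x_pred = F_x @ P_x @ F_x.T + Q_x
    S = H @ P_x_pred @ H.T + R_y
    K_x = P_x_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred - K_x @ (np.atleast_1d(h(x_pred)) - np.atleast_1d(y_i))
    P_x_new = (np.eye(P_x.shape[0]) - K_x @ H) @ P_x_pred   # standard EKF form (assumption)

    # Parameter update using the corrected state, equation (17)
    P_th_bar = P_th - P_th @ F_th.T @ np.linalg.solve(
        Q_x + F_th @ P_th @ F_th.T, F_th @ P_th)
    th_new = th_pred - G_th @ P_th_bar @ F_th.T @ (x_pred - x_new)
    P_th_new = Q_th + G_th @ P_th_bar @ G_th.T

    return x_new, th_new, P_x_new, P_th_new
```

Looping this function over $i = 1, \dots, N$ gives the full training recursion; note that the corrected state `x_new` is used when updating the parameters, which is exactly the alternating structure depicted in the right panel of Figure 1.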
5 Experiments The performance of RNODE is assessed in comparison to state-of-the-art model learning methods on several challenging non-linear simulations and real-world datasets. We employed five different dynamical models to demonstrate the effectiveness of the proposed approach. For each dynamical model, we assumed that we don’t have parts of the governing dynamics available, and replaced them with a neural network. In all of our experiments, we assume the latent process to be constant, that is \( g(\theta(t)) = 0 \), since optimal \( \theta(t)^* \) should be constant. Euler integrator is used as the ODE solver for efficiency and fast computation speed. Since the proposed mechanism rests on determining unobserved latent states from observed measurements, successful learning of the model relies on the distinguishability of latent states as defined in Definition 3.1. To ensure that, we assume partial knowledge of system ODE’s. As benchmark methods, we compared RNODE with three other well-established techniques for dynamical machine learning, namely NODE (Chen et al., 2018), RM (Buisson-Fenet et al., 2022). and PR-SSM (Doerr et al., 2018). Currently, no code is available for the model learning frameworks presented in (Eleftheriadis et al., 2017). Moreover, the available code related to the works in (McHutchon et al., 2015; Ialongo et al., 2019) could be modified to account for the partial observation scenario. However, these algorithms become computationally unfeasible for medium and large datasets (Doerr et al., 2018). For that reason, we were not able to benchmark against these approaches. We emphasize that modifying the above-mentioned methods to either account for the ODE structure or make them computationally tractable is out of the scope of this paper. This also applies to the PRSSM method. Nevertheless, for the sake of providing comparative results, we still include results using PR-SSM which is computationally more efficient than other Gaussian process-based models but does not account for the ODE structure. The benchmark results are summarized in Table 1, which represents normalized Root Mean Square Error (nRMSE) values for each model and method. In Figs. 2–5, we compare RM, PR-SSM, and our proposed method. All results were obtained with learned mean vector field integrated over time. Each subfigure represents the dynamics of a single state and contains ODE solutions for each method. We computed nRMSE using \( \text{nRMSE} = \frac{\sqrt{\sum_{i=1}^{n}(x(t_i) - \hat{x}(t_i))^2}}{\max(x(t)) - \min(x(t))} \), where \( \hat{x}(t_i) \) and \( x(t_i) \) are the estimated and true states at time \( t_i \), respectively, and \( n \) is the number of data points. Table 1: Comparison of nRMSE values for different dynamical models and methods. 
| Methods | Neuron model | Yeast Glycolysis | Cart-pole | Harmonic Oscillator | EMPS | |------------------|--------------|------------------|-----------|---------------------|------| | RM (Buisson-Fenet et al., 2022) | 2.39 \cdot 10^{-1} | 6.30 \cdot 10^{-1} | 1.06 \cdot 10^0 | 2.36 \cdot 10^{-2} | 6.20 \cdot 10^{-1} | | PR-SSM (Doerr et al., 2018) | 4.05 \cdot 10^{-1} | 1.59 \cdot 10^0 | 1.52 \cdot 10^0 | 1.21 \cdot 10^0 | 4.05 \cdot 10^{-1} | | NODE (Chen et al., 2018) | 7.03 \cdot 10^1 | 3.74 \cdot 10^{-1} | 2.84 \cdot 10^{-1} | 4.65 \cdot 10^{-1} | 1.65 \cdot 10^0 | | RNODE (Proposed) | 1.54 \cdot 10^{-1} | 3.39 \cdot 10^{-2} | 9.41 \cdot 10^{-3} | 5.08 \cdot 10^{-3} | 9.50 \cdot 10^{-2} | 5.1 Hodgkin-Huxley Neuron Model The renowned Hodgkin-Huxley Neuron Model (HH) (Hodgkin & Huxley, 1952) is an ODE system that describes the membrane dynamics of action potentials in neurons, which are electrical signals used by neurons to communicate with each other. The model has four states: \( V_m \) is the membrane potential, \( n_{gate} \), \( m_{gate} \), and \( h_{gate} \) are gating variables controlling the membrane’s ionic permeability. The equations governing the ODE system are provided in Eqs. 46–49 of the Appendix B.2. We train our recursive model with the assumption that Eq. 49 governing dynamics of \( h_{gate} \) is unknown and its corresponding state is not observed, i.e., \( y(t_i) = (V_m(t_i), n_{gate}(t_i), m_{gate}(t_i)) \). We replace the dynamics describing \( h_{gate}(t) \) by a neural network consisting of three layers. The first layer is a 20 units layer followed by an Exponential Linear Unit (ELU) activation function, second layer is also a 20 unit layer followed by a tanh activation function. The last layer consists of 10 units with a sigmoid activation function. We generate our dataset by applying a constant control input \( u(t_i) \) to the HH model described in 46–49 for 50000 time steps with \( dt = 10^{-3}s \) and by collecting measurements and inputs \( D \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\} \). We train our model on \( D \) with \( P_{x_0} = 10^{-2}I_{d_x}, P_{\theta_0} = 10^2I_{d_\theta}, R_y = 10^{-10}I_{d_y}, Q_x = 10^{-5}I_{d_x} \) and \( Q_\theta = 10^{-2}I_{d_\theta} \). At the beginning of each epoch, we solve the problem 58 of the Appendix C to get the initial condition. Final optimal parameters \( \hat{\theta}(t_N) \) and initial condition \( \hat{x}(t_0) \) are saved and collected at the end of training. Fig. 2 depicts the dynamics of the system $\hat{\theta}(t_N)$ generated according to the generative model described in Eq (3) starting from initial condition $\hat{x}(t_0)$. The lower right panel demonstrates the superiority of the proposed model at learning $h_{gate}$. To demonstrate the robustness of RNODE to different dynamical regimes and showcase its capability of estimating accurate initial conditions, we perform an additional experiment. For this, we generate data $D_T$ with $N = 50,000$ samples using the HH model with different initial conditions from the ones used during training. From this data, we reserve the first 100 samples for learning the initial condition before performing integration for the remaining 49,900 samples. Then, using the learned model $\hat{\theta}(t_N)$ and the procedure described in Section 4.2, we obtained the initial condition $\hat{x}(t_{100})$ and obtained the RNODE solution. 
Figure 3 shows the evolution of the RNODE attesting to its capability of both estimating accurate initial conditions and generalization to other dynamical regimes. ### 5.2 Cart-Pole System We demonstrate the efficacy of the proposed RNODE in learning the non-linear dynamics of the cart-pole system. The system is composed of a cart running on a track, with a freely swinging pendulum attached to it. The state of the system consists of the cart’s position and velocity, and the pendulum’s angle and angular velocity, while a control input $u$ can be applied to the cart. We used the LQR [Prasad et al., 2011] algorithm to learn a feedback controller that swings the pendulum and balances it in the inverted position in the middle of the track. The equations governing the ODE system are provided in Eqs (54)-(57) of the Appendix B.5. We train our recursive model with the assumption that we don’t know the equation corresponding to $\dot{\phi}$ governing dynamics of the cart-pole’s angular rate. Therefore, we replace Eqs. (55) and (57) with a two-layer neural network with tanh activation function on each layer. We don’t measure cart-pole’s velocity $\dot{z}(t_i)$ and angular rate $\dot{\phi}(t_i)$, i.e., $y(t_i) = [z(t_i), \phi(t_i)]$. We generate our dataset by applying LQR balancing controller to the cart-pole described in Eqs (54)-(57) for 5000 time steps with $dt = 10^{-3}s$ and by collecting measurements and inputs $D \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\}$. We train our model on $D$ with $P_{x_0} = 10^{-2}I_{d_x}, P_{\theta_0} = 10^2I_{d_\theta}, R_y = 10^{-10}I_{d_y}, Q_x = 10^{-5}I_{d_x}$ and $Q_\theta = 10^{-2}I_{d_\theta}$. At the beginning of each epoch, we solve problem (58) of the Appendix C to get the initial condition. Final optimal parameters $\hat{\theta}(t_N)$ and initial condition $\hat{x}(t_0)$ are saved and collected at the end of training. We qualitatively assess the performance of our model by feeding the control sequence stored in $D$ and parameters $\hat{\theta}(t_N)$ to the RNODE according to the generative model described in Eq (3) starting from initial condition $\hat{x}(t_0)$. In Figure 4, we demonstrate the ability of the proposed RNODE to learn the underlying dynamics of the system partially observed data compared to RM and PR-SSM methods. Table I show that RNODE clearly outperforms the competing algorithms with nRMSE value that is 99.3%, 99.1% and 97.67% smaller than the nRMSEs obtained by PR-SMM, RM, and NODE respectively. Analyzing the evolution of the latent states depicted in Figure 4, we notice that RNODE provides state trajectories that match the ground truth (GT) while the other two methods fail to capture the true trajectory. In fact, PR-SSM presents acceptable trajectories of $\dot{z}$ and $\ddot{z}$ but fails to learn $\phi$ and $\ddot{\phi}$ trajectories. On the other hand RM presents acceptable trajectories of $\phi$ and $\ddot{\phi}$ but fails to learn $z$ and $\dot{z}$ trajectories. Moreover, the NODE successfully learns the observed $\phi$ and $z$ trajectories but fails to learn correct trajectories of the unobserved states $\ddot{\phi}$ and $\dot{z}$. Both RM and PR-SSM estimated state trajectories are much more inaccurate than the one provided by RNODE. 
The main reason for this inaccuracy is that trajectory generation is run using a pre-computing control sequence $U \triangleq \{u(t_0), \ldots, u(t_{N-1})\} \in D$, hence any inaccuracy in the learned dynamics would cause the trajectories to go way off the ground truth (GT) due to the nonlinearity of the cart-pole system. This shows the challenging nature of the problem and the proposed approach’s efficiency in learning challenging nonlinear dynamics. In this context, RNODE’s superior performance is due to its alternating optimization approach since estimates of unobserved states become available when optimizing $\theta$. This feature is unavailable in the competing methods. 5.3 ELECTRO-MECHANICAL POSITIONING SYSTEM Here we evaluate the proposed RNODE on real data from an electro-mechanical positioning system described in (Janot et al., 2019). The training Dataset consists of system’s of position, velocity, and control inputs used. The dataset consists of 24801 data points for each state and control input with $dt = 10^{-3}s$. In a similar fashion to the HH and cart-pole systems, we train the RNODE using position and control inputs. We replace the velocity’s dynamics by a neural network of two layers of 50 and 20 units respectively followed by a tanh activation function. Table 1 show that RNODE clearly outperforms the competing algorithms with nRMSE value that is 99.9%, 84.6% and 94.2% smaller than the nRMSEs obtained by PR-SMM, RM, and NODE, respectively. Analyzing the evolution of the latent states depicted in Figure 5, we notice that RNODE provides state trajectories that match the ground truth (GT) while PR-SSM and RM collapse catastrophically. The NODE learns the period of the hidden $\dot{q}_m$ signal but fails the capture its amplitude. The stiffness of $\dot{q}_m$ dynamics plays a role in these results since the sudden jumps shown in Figure 5 are hard to capture. This again demonstrates the robustness of the proposed approach. 6 CONCLUSIONS We proposed a novel recursive learning mechanism for NODE’s to address the challenging task of learning the complex dynamics of ODE systems with partial observations. Specifically, we constructed an alternating optimization procedure using Newton’s method that sequentially finds optimal system latent states and model parameters. The resulting framework, RNODE, allows for efficient learning of missing ODEs when latent states are distinguishable. Different from other competing methods, RNODE optimizes model parameters using latent states instead of observed data, leading to superior performance under the partial observation setting. Experiments performed with three complex synthetic systems and one with real data provide evidence that RNODE is capable of providing adequate solutions in very challenging scenarios, attesting RNODE’s superior performance when compared with other state-of-the-art strategies. REFERENCES Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning. *arXiv preprint arXiv:2002.09018*, 2020. Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering governing equations from partial measurements with deep delay autoencoders. *arXiv preprint arXiv:2201.05136*, 2022. Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering governing equations from partial measurements with deep delay autoencoders. *Proceedings of the Royal Society A*, 479(2276):20230422, 2023. Bradley M Bell. 
The iterated kalman smoother as a gauss–newton method. *SIAM Journal on Optimization*, 4(3):626–636, 1994. Dimitri P Bertsekas. Incremental least squares methods and the extended kalman filter. *SIAM Journal on Optimization*, 6(3):807–822, 1996. VG Boltyanskiy, Revaz V Gamkrelidze, YEF Mishchenko, and LS Pontryagin. Mathematical theory of optimal processes. 1962. Aleksandar Botev, Hippolyt Ritter, and David Barber. Practical gauss-newton optimisation for deep learning. In *International Conference on Machine Learning*, pp. 557–565. PMLR, 2017. Mona Buisson-Fenet, Valery Morgenthaler, Sebastian Trimpe, and Florent Di Meglio. Recognition models to learn dynamics from partial observations with neural odes. *Transactions on Machine Learning Research*, 2022. Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-newton method for large-scale optimization. *SIAM Journal on Optimization*, 26(2):1008–1031, 2016. Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. *Advances in neural information processing systems*, 31, 2018. Edward De Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. Gru-ode-bayes: Continuous modeling of sporadically-observed time series. *Advances in neural information processing systems*, 32, 2019. Ahmet Demirkaya, Tales Imbiriba, Kyle Lockwood, Sumientra Rampersad, Elie Alhajjar, Giovanna Guidoboni, Zachary Danziger, and Deniz Erdogmus. Cubature Kalman filter based training of hybrid differential equation recurrent neural network physiological dynamic models. *43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society*, 2021. Andreas Doerr, Christian Daniel, Martin Schiegg, Nguyen-Tuong Duy, Stefan Schaal, Marc Tous-saint, and Trimpe Sebastian. Probabilistic recurrent state-space models. In *International conference on machine learning*, pp. 1280–1289. PMLR, 2018. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of machine learning research*, 12(7), 2011. Stefanos Eleftheriadis, Tom Nicholson, Marc Deisenroth, and James Hensman. Identification of gaussian process state space models. *Advances in neural information processing systems*, 30, 2017. Paul Ghanem, Yunus Bicer, Deniz Erdogmus, and Alireza Ramezani. Efficient modeling of morphing wing flight using neural networks and cubature rules. *arXiv preprint arXiv:2110.01057*, 2021. Donald Goldfarb, Yi Ren, and Achraf Bahamou. Practical quasi-newton methods for training deep neural networks. *Advances in Neural Information Processing Systems*, 33:2386–2396, 2020. Robert Gower, Donald Goldfarb, and Peter Richtárik. Stochastic block bfgs: Squeezing more curvature out of data. In *International Conference on Machine Learning*, pp. 1869–1878. PMLR, 2016.
EGjvMcKrrl
Regarding the enhanced robustness of SSMs to different temporal dependencies, the authors take $1/\sqrt{\tau(\theta)}$ as a rescaling factor for initialization. Are there any theoretical guarantees (e.g., a variance analysis) on this robustness compared with the HiPPO framework?
FROM GENERALIZATION ANALYSIS TO OPTIMIZATION DESIGNS FOR STATE SPACE MODELS Anonymous authors Paper under double-blind review ABSTRACT A State Space Model (SSM) is a foundation model in time series analysis, which has recently been shown as an alternative to transformers in sequence modeling. In this paper, we theoretically study the generalization of SSMs and propose improvements to training algorithms based on the generalization results. Specifically, we give a data-dependent generalization bound for SSMs, showing an interplay between the SSM parameters and the temporal dependencies of the training sequences. Leveraging the generalization bound, we (1) set up a scaling rule for model initialization based on the proposed generalization measure, which significantly improves the robustness of the output value scales on SSMs to different temporal patterns in the sequence data; (2) introduce a new regularization method for training SSMs to enhance the generalization performance. Numerical results are conducted to validate our results. 1 INTRODUCTION Sequence modeling has been a long-standing research topic in many machine learning areas, such as speech recognition [Hinton et al., 2012], time series prediction [Li et al., 2019], and natural language processing [Devlin et al., 2019]. Various machine learning models have been successfully applied in sequence modeling to handle different types of sequence data, ranging from the (probabilistic) Hidden Markov model [Baum & Petrie, 1966] to deep learning models, e.g., Recurrent Neural Networks (RNNs), Long Short-Term Memory units [Hochreiter & Schmidhuber, 1997], Gated Recurrent Unit [Chung et al., 2014], and transformers [Vaswani et al., 2017]. In this paper, we focus on the state space model (SSM), which has a simple mathematical expression: \[ h'(t) = Ah(t) + Bx(t), \quad y(t) = Ch(t) + Dx(t) \] where \( h(t) \) is the hidden state, \( x(t) \) is the input sequence, \( y(t) \) is the output sequence and \( A, B, C, D \) are trainable parameters. Recent studies have demonstrated the power of SSMs in deep learning. For example, it was shown in [Gu et al., 2022a] that by a new parameterization and a carefully chosen initialization, the structured state space sequence (S4) model achieved strong empirical results on image and language tasks. Following the S4 model, more variants of SSMs are proposed, e.g., the diagonal SSM [Gu et al., 2022b], Gupta et al., 2022, the S5 model [Smith et al., 2023], the H3 model [Fu et al., 2023], the GSS model [Mehta et al., 2023], and the Hyena Hierarchy [Poli et al., 2023]. Theoretical analysis and understanding of the approximation and optimization of SSMs are well studied in the literature such as [Li et al., 2021, 2022, Gu et al., 2022a, 2023]. Since the SSM can be regarded as a continuous linear RNN model [Li et al., 2022], most generalization analysis of SSMs is based on the generalization theory of RNNs [Zhang et al., 2018, Chen et al., 2019, Tu et al., 2019]. However, these previous works did not study the effects of the temporal dependencies in the sequence data on the SSM generalization (See more details on the comparison in Section 4.1). As an attempt to understand the relationship between the temporal dependencies and the generalization performance, this paper aims to provide a generalization bound that connects the memory structure of the model with the temporal structure of the data. We can, in turn, use the proposed bound to guide us in designing new algorithms to improve optimization and generalization. 
Specifically, we discover two roles for the proposed generalization measure: (1) generalization bound as an initialization scheme; (2) generalization bound as a regularization method. The common initialization method for the S4 model and its variants follows from the HiPPO framework [Gu et al., 2022a]. \(^{1}\)To simplify the analysis, we omit the skip connection by letting \( D = 0 \) which is based on the prerequisite that the training sequence data is stable. To improve the robustness of the output value scales on SSMs to different temporal patterns in the sequence data, we consider to rescale the initialization of SSMs with respect to the generalization measure. This new initialization scheme makes the SSMs more resilient on their initial output value scales to variations in the temporal patterns of the training data. Except for the initialization setup, our generalization bound can also be served as a regularizer. Regularization methods like weight decay and dropout are widely applied to training SSMs, but the hidden state matrix $A$ is not regularized because its imaginary part controls the oscillating frequencies of the basis function $e^{At}B$ (Gu et al., 2022b). By taking into account the interaction between the SSM structure and the temporal dependencies, we introduce a new regularization method based on our bound, and it can be applied to the hidden state space to improve the generalization performance. When combining the initialization scheme and the regularization method, our method is applicable to various tasks, ranging from image classification to language processing, while only introducing a minimal computational overhead. To summarize, our contributions are as follows: - We provide a data-dependent generalization bound for SSMs by taking into account the temporal structure. Specifically, the generalization bound correlates with the memory structure of the model and the (auto)covariance process of the data. It indicates that instead of the weight or the data norm, it is the interplay between the memory structure and the temporal structure of the sequence data that influences the generalization. - Based on the proposed generalization bound, we setup an initialization scaling rule by adjusting the magnitude of the model parameters with respect to the generalization measure at initialization. This scaling rule improves the robustness of the initial output value scales on SSMs across different temporal patterns of the sequence data. - Apart from the initialization scheme, we design a new regularizer for SSMs. Unlike weight decay, our regularizer does not penalize the parameter norm but encourages the model to find a minimizer with lower generalization bound to improve the generalization performance. 2 RELATED WORKS Since a SSM is also a continuous linear RNN, there are three lines of research that are related to our work: generalization of RNNs, temporal structure analysis on RNNs, and optimization of SSMs. Generalization of RNNs. Existing works on the generalization of RNNs focus on the generalization error bound analysis. Specifically, in the early two works of Dasgupta & Sontag (1995) and Koiran & Sontag (1998), VC dimension-based generalization bounds were provided to show the learnability of RNNs. In recent studies, Zhang et al. (2018); Chen et al. (2019); Tu et al. (2019) proved norm-based generalization bounds, improving the VC dimension-based bounds by the Rademacher complexity technique (Bartlett & Mendelson 2002) under the uniform-convergence framework. 
In the overparameterization settings, it was shown in Allen-Zhu & Li (2019) that RNNs can learn some concept class in polynomial time given that the model size is large enough. These generalization bounds, however, do not take into account the temporal dependencies and their effects on generalization. In this work, we provide a new generalization bound by combining the memory structure of the model and the temporal structure of the data. Temporal structure analysis on RNNs. Sequence data has long-range temporal dependencies across the time domain, which notably set it apart from non-sequence data. Recent studies have studied the effects of such temporal dependencies on the approximation and optimization of RNNs. For example, in the two works of Li et al. (2021; 2022), a “curse of memory” phenomenon was discovered when using linear RNNs to model the temporal input-output relationships. Particularly, when the target relationship between the input and output has a long-term memory, then both approximation and optimization become extremely challenging. In Wang et al. (2023), the “curse of memory” phenomenon on approximation and optimization was extended to non-linear RNNs based on the temporal relationships. In this paper, we conduct a fine-grained analysis on the effects of the temporal structure analysis on the generalization of RNNs. Optimization of SSMs. RNN optimization is known for two issues: training stability and computational cost (Bengio et al., 1994; Pascanu et al., 2013). To address these two issues and capture the long dependencies more efficiently in sequence modeling, the S4 model was proposed by in- troducing new parameterization, initialization and discretization (Gu et al., 2022a). Recent variants for the S4 model simplified the hidden state matrix by a diagonal matrix to enhance computational efficiency (Gu et al., 2022b; Gupta et al., 2022; Smith et al., 2023; Orvieto et al., 2023). Regularization methods are also applied for SSMs to prevent overfitting, such as dropout, weight decay and the data continuity regularizer (Qu et al., 2023). However, the principled way to regularize and initialize the parameters still remains to be explored. In this study, we design a new regularization and initialization scheme to improve both optimization and generalization. 3 PRELIMINARIES In this section, we briefly introduce the SSM in Section 3.1 and the motivation for optimization designs based on the generalization analysis in Section 3.2. 3.1 INTRODUCTION TO SSMs In this paper, we consider the following single-input single-output SSM, \[ h'(t) = Ah(t) + Bx(t), \quad y(t) = Ch(t), \quad t \geq 0 \] where \( x \) is the input from an input space \( \mathcal{X} := C_0(\mathbb{R}_{\geq 0}, \mathbb{R}) \); \( y(t) \in \mathbb{R} \) is the output at time \( t \); \( h(t) \in \mathbb{R}^m \) is the hidden state with \( h(0) = 0 \); \( A \in \mathbb{R}^{m \times m}, B \in \mathbb{R}^{m \times 1}, C \in \mathbb{R}^{1 \times m} \) are trainable parameters. Then (1) has an explicit solution \( y(t) = \int_0^t \rho_\theta(s)x(t-s)ds \), where \( \rho_\theta(s) := Ce^{As}B \) with \( \theta = (C, A, B) \). The function \( \rho_\theta(s) \) captures the memory structure of the model and the temporal input-output relationship (Li et al., 2022). 
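For concreteness, the memory function \( \rho_\theta(s) = Ce^{As}B \) and the corresponding output functional \( y(T) = \int_0^T \rho_\theta(T-s)x(s)ds \) can be evaluated numerically with a few lines of code. A minimal sketch follows; the matrices, the input signal, and the grid size are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import expm

def rho(theta, s):
    """Memory function rho_theta(s) = C exp(A s) B of the SSM (1)."""
    C, A, B = theta
    return float(C @ expm(A * s) @ B)

def ssm_output(theta, x_fn, T, n_grid=2000):
    """y(T) = int_0^T rho_theta(T - s) x(s) ds, via the trapezoid rule."""
    s = np.linspace(0.0, T, n_grid)
    vals = np.array([rho(theta, T - si) * x_fn(si) for si in s])
    return np.trapz(vals, s)

# Illustrative placeholders: a stable diagonal A and a decaying input signal.
rng = np.random.default_rng(0)
m = 8
A = np.diag(-rng.uniform(0.5, 2.0, size=m))
B = rng.standard_normal((m, 1))
C = rng.standard_normal((1, m))
y_T = ssm_output((C, A, B), x_fn=lambda t: np.exp(-t), T=5.0)
```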
For the S4 model and its variants (Gu et al., 2022a,b; Gupta et al., 2022; Gu et al., 2023), (1) is usually discretized by the Zero-Order Hold method, i.e., given a timescale \( \Delta \in \mathbb{R}_+ \), \( h_{k+1} = Ah_k + Bx_k, \quad y_k = Ch_k, \quad k = 0, 1, \ldots \), where \( \tilde{A} = e^{\Delta A}, \tilde{B} = (\tilde{A} - I_m)\tilde{A}^{-1}B, \tilde{C} = C \). Then, \( y_k = \tilde{C}\tilde{A}^kBx_0 + \tilde{C}\tilde{A}^{k-1}Bx_1 + \ldots + \tilde{C}\tilde{B}x_k = [\tilde{K} * x]_k \) where \( \tilde{K} = (\tilde{CB}, \tilde{CAB}, \ldots, \tilde{CA}^kB) \) and \( * \) represents convolution. 3.2 MOTIVATION: A LINEAR REGRESSION MODEL In this subsection, we use a linear regression model on non-sequential data as an example to illustrate the connection between the generalization analysis and the optimization designs. This example then motivates us to extend the connection to SSMs on sequential data. Linear regression. We consider a simple linear model \( y = \theta^\top x \) with input \( x \in \mathbb{R}^d \), output \( y \in \mathbb{R} \) and parameter \( \theta \in \mathbb{R}^d \). Let the training data \( \{(x_i, y_i)\}_{i=1}^n \) be i.i.d. sampled from a distribution \( D \) such that \( \|x_i\|_2 = r, \|y_i\| \leq 1 \forall i \in [1:n] \). Define the empirical risk \( L_n(\theta) := \frac{1}{n} \sum_{i=1}^n (\theta^\top x_i - y_i)^2 \) and the population risk \( L_D(\theta) := \mathbb{E}_{x,y}[(\theta^\top x - y)^2] \). Then given a norm-constrained space \( \Theta := \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq R\} \), \[ \sup_{\theta \in \Theta} |L_n(\theta) - L_D(\theta)| \leq (rR + 1)^2 \cdot O(\sqrt{\log(1/\delta)/n}). \] This is a well-known norm-based generalization bound based on the Rademacher theory (Mohri et al., 2012), and we provide a proof in Appendix B for completeness. Notice that the key term \( r^2R^2 \) in the generalization bound (2) is also an upper bound for the magnitude of the linear model output, i.e., \( \sup_{\theta \in \Theta} (\theta^\top x_i)^2 \leq r^2R^2 \). Thus, we connect the model stability with the generalization bound stability, and this connection induces an initialization scheme for the initialization \( \theta^{(0)} \) by setting \( \|\theta^{(0)}\|_2 \sim O(1/r) \). In particular, if we normalize each input \( x_i \) such that \( r \) is also \( O(1) \), then \( \|\theta^{(0)}\|_2 \sim O(1) \). Since \( \theta^{(0)} \in \mathbb{R}^d \), one possible initialization scheme is that \( \theta^{(0)} \) follows a Uniform distribution \( U[-1/\sqrt{d}, 1/\sqrt{d}] \), which corresponds to the Kaiming initialization (up to some constant) (He et al., 2015). When treating the term \( r^2R^2 \) as a regularizer to improve the generalization, we get the weight decay method, i.e., the \( \ell_2 \) regularization w.r.t. \( \|\theta\|_2^2 \). We summarize the above logic chain that connects the generalization analysis with optimization designs in Figure 1. Now for SSMs, we extend the generalization analysis from non-sequential data to sequential data by taking into account the temporal structure of the data. This linear regression example motivates us to apply the same logic diagram (Figure 1) to the SSMs, and this is exactly what we are going to present in the following part of this paper. --- 2A linear space of continuous functions from \( \mathbb{R}_{\geq 0} \) to \( \mathbb{R} \) that vanishes at infinity. 
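The logic chain summarized in Figure 1 can be written out in a few lines: normalize the inputs so that \( r = O(1) \), initialize \( \theta \) at the scale suggested by the bound, and penalize the same quantity \( r^2R^2 \) during training (weight decay). The sketch below is a minimal illustration under these assumptions; the data, learning rate, and regularization strength are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # normalize so r = ||x_i||_2 = 1
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Initialization scale O(1/r): theta^(0) ~ U[-1/sqrt(d), 1/sqrt(d)]
theta = rng.uniform(-1.0 / np.sqrt(d), 1.0 / np.sqrt(d), size=d)

lr, lam = 0.1, 1e-3                                  # lam regularizes ||theta||_2^2 (weight decay)
for _ in range(500):
    grad = 2.0 / n * X.T @ (X @ theta - y) + 2.0 * lam * theta
    theta -= lr * grad
```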
4 MAIN RESULTS In this section, we first give a generalization bound for SSMs in Section 4.1, then we design a new initialization scheme in Section 4.2 based on this proposed bound. Apart from the initialization scheme, we introduce a new regularization method in Section 4.3. Finally, we conduct experiments to test the initialization scheme and the regularization method in Section 4.4. 4.1 A GENERALIZATION BOUND OF SSMs In this section, we present a generalization bound for the SSM (1) and reveal the effects of the temporal dependencies on the generalization performance. We show that our bound gives a tighter estimate compared with previous norm-based bounds through a toy example. Following the same notation in Section 3.1, we define the empirical risk $R_n(\theta)$ and the population risk $R_x(\theta)$ as $$R_n(\theta) := \frac{1}{n} \sum_{i=1}^{n} \left| \int_0^T \rho_\theta(T-s)x_i(s)ds - y_i \right|^2,$$ $$R_x(\theta) := \mathbb{E}_x \left[ \int_0^T \rho_\theta(T-s)x(s)ds - y \right]^2,$$ where $T > 0$ is some finite terminal time, the training sequence data $\{x_i(t)\}_{i=1}^{n}$ are independently sampled from a stochastic process with mean $\mathbb{E}[x(t)] := \mu(t)$ and covariance $\mathbb{E}[(x(s)-\mu(s))(x(t)-\mu(t))] := K(s,t)$, and the label $y$ is generated by some underlying functional $H_T : X \rightarrow \mathbb{R}$, i.e., $y = H_T(x)$. We assume that $|y| \leq 1$ for any $x \in X$, otherwise, we truncate the value of the label to 1. In the next, we make an assumption on the normalized process $\tilde{x}(t) := (x(t) - \mu(t))/\sqrt{K(t,t)}$: **Assumption 1.** The normalized process $\tilde{x}(t)$ is (1): almost surely Hölder continuous, i.e., $\exists L,H > 0, s.t.\forall s,t \in [0,T], |\tilde{x}(s) - \tilde{x}(t)| \leq L|s-t|^H a.s.;$ (2): is $\sigma^2$-sub-Gaussian for every $t \in [0,T]$, i.e., $\exists \sigma > 0, s.t.\forall u > 0, P(|\tilde{x}(t)| \geq u) \leq 2\exp(-u^2/2\sigma^2)$ for any $t \in [0,T]$. We leave the discussion of the assumption after the statement of the main theorem. Now we proceed to bound generalization gap $|R_x(\theta) - R_n(\theta)|$ by establishing uniform convergence of the empirical risk to its corresponding population risk, as stated in following theorem: **Theorem 1.** For a SSM $\int_0^T \rho_\theta(T-s)x(s)ds$, following notations and settings in Section 3.1 & 4.1 we define $\psi(\Theta) := \sup_{\theta \in \Theta} \int_0^T |\rho_\theta(T-s)|\sqrt{K(s,s)}ds + \sup_{\theta \in \Theta} \int_0^T \rho_\theta(T-s)\mu(s)ds$. Then under Assumption 1, given a parameter space $\Theta$ for $\theta$, for any $\delta \in (0,1)$, with probability at least $1-\delta$ over the training sequences, $$\sup_{\theta \in \Theta} |R_x(\theta) - R_n(\theta)| \leq (\psi(\Theta) + 1)^2 \cdot O(\log^{3/2}(Tn/\delta)/\sqrt{n}).$$ Where $O$ hides a constant that depends on $\sigma,L,H$. The proof is given in Appendix E. We see that this bound decreases to zero as the sample size $n \rightarrow \infty$, provided that the terminal time $T$ is finite and the supremum term in (3) is bounded. Theorem 1 captures the temporal dependencies of the sequence data on the SSM generalization, yielding that the mean and variance at each length position together play important roles in generalization analysis. Specifically, as long as $\psi(\Theta)$ is small, the generalization gap is small. Since the function $\rho_\theta(s)$ is exponentially decay, we do not require the mean and variance to be uniformly small along the time $t$ to get a small generalization gap. 
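Before sketching the proof, it may help to see how the key quantity in Theorem 1 can be estimated numerically for a single parameter \( \theta \) (the theorem takes a supremum over \( \Theta \)). In the sketch below, the mean \( \mu(s) \) and variance \( K(s,s) \) of the training sequences are assumed to be estimated empirically on a time grid, and the integrals are discretized by the trapezoid rule; the array names are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def generalization_measure(C, A, B, mu, var, dt):
    """psi(theta) = int_0^T |rho_theta(T-s)| sqrt(K(s,s)) ds
                    + |int_0^T rho_theta(T-s) mu(s) ds|.

    mu, var: length-L arrays holding the empirical mean and variance of the
    training sequences at the grid points s = 0, dt, ..., T = (L-1)*dt.
    """
    L = len(mu)
    T = (L - 1) * dt
    s = np.arange(L) * dt
    rho = np.array([float(C @ expm(A * (T - si)) @ B) for si in s])
    term1 = np.trapz(np.abs(rho) * np.sqrt(var), dx=dt)
    term2 = abs(np.trapz(rho * mu, dx=dt))
    return term1 + term2

# Usage with training sequences stored as an (n, L) array `X_train` (placeholder name):
# mu, var = X_train.mean(axis=0), X_train.var(axis=0)
# psi = generalization_measure(C, A, B, mu, var, dt)
```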
**Proof sketch.** The proof is based on Rademacher theory (Bartlett & Mendelson, 2002). The main difficulty is to bound the Rademacher complexity of the SSM function $\int_0^T \rho_\theta(T-s)x(s)ds$ for a stochastic process $x(s)$. We first use the Hölder inequality to get an upper bound for the Rademacher complexity w.r.t. the normalized process $\tilde{x}(s)$; then, combining Hölder continuity and the sub-Gaussian tail property in Assumption 1, we show the finiteness of $\sup_{s \in [0,T]} \tilde{x}(s)$. Finally, we use an $\varepsilon$-net argument to give an explicit bound for the Rademacher complexity, which then finishes the proof.

Discussion of Assumption 1. This assumption contains two parts. Hölder continuity is used to bound \( \sup_{s \in [0,T]} \tilde{x}(s) \) and the Rademacher complexity of the SSM function class. By the Kolmogorov continuity theorem (Stroock & Varadhan, 1997), Hölder continuity covers a wide range of random processes that satisfy certain moment inequalities. The sub-Gaussian property ensures that \( \tilde{x}(s) \) is bounded on a finite set of times with high probability. Sub-Gaussian random variables include Gaussian and all bounded random variables. In particular, for image classification tasks with flattened image pixels, if the pixel values take only finitely many values, the Hölder continuity condition can be dropped. We leave more detailed discussions and some concrete examples that satisfy Assumption 1 to Appendix C.

Comparison to previous bounds. Since an SSM is also a continuous linear RNN model, we compare (3) with previous bounds for linear RNNs. In Chen et al. (2019), a generalization bound \( O(\|x\|_2 \|B\|_2 \|C\|_2 \|A\|_2 / \sqrt{n}) \) is provided, where \( \|x\|_2 \) is the 2-norm of the discrete input sequence. In the continuous case, \( \|x\|_2 \) corresponds to the \( L^2 \) norm w.r.t. a Dirac measure. By changing the matrix 2-norm to the matrix 1-norm, Tu et al. (2019) obtain a similar generalization bound. These bounds separate the data complexity and the model complexity through the data norm and the model parameter norm individually, and do not account for the temporal dependencies across the time domain. In this work, instead, we incorporate the temporal dependencies via the sequence statistics (mean and variance) to get a generalization bound. Next, we use a toy example to illustrate that our bound gives a tighter estimate.

Given a stochastic process \( \{x(t)\}_{t \in [0,T]} \) with mean \( \mu(t) \) and covariance \( K(s,t) \), we consider the following two upscale transformations (by increasing \( T \) to \( 2T \)):
1. left zero padding: \( x_1(t) = 0, \ t \in [0,T); \quad x_1(t) = x(t-T), \ t \in [T,2T] \)
2. right zero padding: \( x_2(t) = x(t), \ t \in [0,T]; \quad x_2(t) = 0, \ t \in (T,2T] \)
Then the two SSM outputs are given by \( y_i(2T) = \int_0^{2T} \rho_\theta(2T-s)x_i(s)ds \) for \( i = 1,2 \). Hence,
\[ y_1(2T) = C \int_0^T e^{A(T-s)} B x(s) ds, \quad y_2(2T) = Ce^{AT} \int_0^T e^{A(T-s)} B x(s) ds. \]
The magnitudes of \( y_1(2T) \) and \( y_2(2T) \) differ by an exponential factor \( e^{AT} \). Since all the eigenvalues of \( A \) have negative real part, \( y_2(2T) \to 0 \) as \( T \) increases. Hence, the right zero padding transformation degenerates the SSM function class to a zero function class for large \( T \), inducing a minimal generalization gap that only contains the statistical sampling error (see (3) with \( K(s,s) = \mu(s) = 0 \)).
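The gap between the two padded outputs can be checked numerically. The sketch below uses a hypothetical scalar SSM and an arbitrary input signal (all values are illustrative assumptions), and verifies that the right-zero-padded output is smaller than the left-zero-padded one by the factor \( e^{aT} \).

```python
import numpy as np

# Hypothetical scalar SSM rho(s) = C e^{a s} B with a < 0, and a fixed input on [0, T].
a, B, C, T = -0.8, 1.0, 1.0, 5.0
t = np.linspace(0.0, 2 * T, 4001)          # time grid on [0, 2T]
dt = t[1] - t[0]

x = np.sin(3 * t[t <= T]) + 1.0            # arbitrary input x(s) on [0, T]
n = x.size

# Left zero padding: zeros on [0, T), then x shifted to [T, 2T].
x_left = np.concatenate([np.zeros(t.size - n), x])
# Right zero padding: x on [0, T], then zeros on (T, 2T].
x_right = np.concatenate([x, np.zeros(t.size - n)])

rho = C * np.exp(a * (2 * T - t)) * B      # rho(2T - s) on the grid
y_left = np.sum(rho * x_left) * dt         # Riemann-sum approximation of y_1(2T)
y_right = np.sum(rho * x_right) * dt       # Riemann-sum approximation of y_2(2T)

print(y_right / y_left, np.exp(a * T))     # the two ratios coincide (up to discretization)
```

In the scalar case the ratio is exactly \( e^{aT} \); with matrix-valued \( A \) the same role is played by \( e^{AT} \).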
Therefore, a desired generalization bound should reflect such a difference caused by the different temporal dependencies. However, previous norm-based generalization bounds do not capture this difference between the two transformations, as they produce the same \( L^2 \) norm for the input sequence. Let us see what happens for our proposed generalization measure. For the left zero padding, the key term in (3) becomes
\[ \int_0^T \left| Ce^{A(T-s)} B \right| \sqrt{K(s,s)} ds + \left| \int_0^T Ce^{A(T-s)} B \mu(s) ds \right| + 1. \quad (5) \]
For the right zero padding, the key term in (3) becomes
\[ \int_0^T \left| Ce^{AT} e^{A(T-s)} B \right| \sqrt{K(s,s)} ds + \left| \int_0^T Ce^{AT} e^{A(T-s)} B \mu(s) ds \right| + 1. \quad (6) \]
The detailed derivations are given in Appendix D. By the same argument, our bound (3) indeed captures the difference in the magnitude of the generalization performance for these two sequence transformations. In particular, as \( T \to \infty \), (6) reduces to 1, which yields a minimal generalization gap, as expected for the zero function class. In this sense, our bound is tighter for SSMs.

Zero-shot transferability. A benefit of SSMs is the zero-shot transfer to other sampling frequencies (i.e., the timescale measure in the continuous case). For example, for an SSM function \( y_T = \int_0^T \rho_\theta(T-s)x(s)ds \), if we downsample the input sequence \( x(s) \) to half the sampling frequency, then the SSM output becomes \( y_T = \int_0^{T/2} \rho_\theta(T-2s)x(2s)ds \), which equals \( \int_0^T \frac{1}{2} \rho_\theta(T-s)x(s)ds \). Now for a new SSM parameter \( \tilde{\theta} = (2C,A,B) \), we have \( \rho_{\tilde{\theta}}(s) = 2\rho_\theta(s) \), indicating that by simply modifying the SSM parameters, one can transfer the model to half the sampling frequency while keeping the output invariant. One advantage of our generalization measure is that it is also zero-shot transferable. To see this, we use the same example. Under the downsampling, both \( \int_0^T |\rho_\theta(T-s)| \sqrt{K(s,s)} ds \) and \( \left| \int_0^T \rho_\theta(T-s)\mu(s)ds \right| \) remain invariant under the new parameter \( \tilde{\theta} \) because \( \sqrt{K(s,s)} \) and \( \mu(s) \) scale in the same way as \( x(s) \). Similarly, other sampling frequencies are also zero-shot transferable for our generalization measure by simply adjusting the SSM parameters.

4.2 GENERALIZATION BOUND AS AN INITIALIZATION SCHEME

In this section, we design a scaling rule for the SSM parameters at initialization based on the generalization bound (3). This new initialization scheme improves the robustness of the initial output scale of SSMs across different temporal patterns of the sequence data. Our proposed initialization scheme is built on the HiPPO-based initialization (Gu et al., 2023), which is a data-independent initialization method. Specifically, the HiPPO framework initializes the hidden state matrices \( A, B \) to produce orthogonal basis functions, and the matrix \( C \) to be standard normal for training stability. However, the argument for training stability relies on the prerequisite that the input sequence is constant along the length (Gu et al., 2023, Corollary 3.4), which is restrictive, as long-range dependencies may lead to very different temporal patterns in the input sequence.
As the dashed lines in the left and right parts of Figure 2 show, the SSM output scale and the loss scale under the HiPPO-based initialization vary considerably across different temporal dependencies, making the loss values inconsistent during training. To address this issue, we follow the logic diagram in Figure 1 by adjusting the generalization complexity to be \( O(1) \). Specifically, we extract the dominant term in the generalization bound (3):
\[ \tau(\theta) := \left( \int_0^T |\rho_\theta(T-s)| \sqrt{K(s,s)} ds + \left| \int_0^T \rho_\theta(T-s)\mu(s)ds \right| \right)^2. \quad (7) \]
Notice that \( \rho_\theta(s) = Ce^{As}B \); if we rescale \( C \) to \( \xi C \) for some \( \xi \in \mathbb{R} \), we have \( \tau(\tilde{\theta}) = \xi^2 \cdot \tau(\theta) \) for \( \tilde{\theta} = (\xi C, A, B) \). This induces a new initialization scheme: once the parameters \( \theta = (C, A, B) \) are initialized by the HiPPO method, we rescale \( C \) to \( \tilde{C} \) such that
\[ \tilde{C} = \frac{1}{\sqrt{\tau(\theta)}} \cdot C = \frac{1}{\int_0^T |\rho_\theta(T-s)| \sqrt{K(s,s)} ds + \left| \int_0^T \rho_\theta(T-s)\mu(s)ds \right|} \cdot C. \quad (8) \]
This rescaling method guarantees that the SSM output is bounded at initialization for any stochastic process that satisfies Assumption 1, ensuring the robustness of the initial loss scale of SSMs across different temporal dependencies. We formalize the statement in Proposition 1.

**Proposition 1.** Consider an SSM \( \int_0^T \rho_\theta(T-s)x(s)ds \) with \( \theta = (C, A, B) \). For any stochastic process \( x(s) \) that satisfies Assumption 1, let \( \tilde{C} \) be given by the rescaling method (8); then for \( \tilde{\theta} := (\tilde{C}, A, B) \), we have \( \mathbb{E}_x \left| \int_0^T \rho_{\tilde{\theta}}(T-s)x(s)ds \right| \leq O(\sqrt{\log T}) \).

The proof is provided in Appendix F. Proposition 1 shows that the SSM output values are uniformly bounded over all stochastic processes that satisfy Assumption 1, even when the input sequence is not almost surely bounded. This improves the robustness of the output scale of SSMs in the sense that it does not depend on variations in the temporal structure. It is worth noting that, unlike data normalization methods such as min-max normalization and standardization, our rescaling method only changes the model parameters. This is important because normalizing the numerical values of the data in language tasks can lose crucial information. For example, mathematical expressions like "\( \max(1,9) = 9 \)" carry contextual meaning, and normalizing them could destroy structured information essential for understanding.

**Implementation.** In practical training, the SSMs used for tasks such as image classification or language processing are usually deep and high-dimensional (\( d > 1 \)), while our initialization scheme (8) is designed for the one-dimensional shallow SSM. To extend to high-dimensional SSMs, we empirically treat all features as independent and calculate \( \tau(\theta) \) by averaging along the feature dimension. For a \( k \)-layer SSM with initial matrices \( C_1, \ldots, C_k \), we first calculate the complexity measure \( \tau_1(\theta) \) for the first layer and rescale \( C_1 \) to \( C_1/\sqrt{\tau_1(\theta)} \).
Then we calculate the complexity measure $\tau_2(\theta)$ for the second layer on the updated input sequence of layer 2 and rescale $C_2$ to $C_2/\sqrt{\tau_2(\theta)}$. We repeat this process until the last layer. We describe the complete procedure for one-layer SSMs in Algorithm 1, where $|\cdot|$ and $\sqrt{\cdot}$ in Line 5 denote the element-wise absolute value and square root, respectively, $[\cdot]_L$ extracts the last position of the convolution output, and the $\text{Mean}(\cdot)$ operation in Line 6 calculates the mean value of a vector.

**Algorithm 1** Training one-layer SSMs with the initialization scheme (8)

**Input:** Training sequences $x_1, \ldots, x_n \in \mathbb{R}^{L \times d}$ with length $L$ and dimension $d$, an SSM initialization $\theta_0 = (C, A, B)$, an SSM kernel function $k(\theta) \in \mathbb{R}^{L \times d}$, number of epochs $s$
1: for $i = 0$ to $s - 1$ do
2:   if $i = 0$ then
3:     Sample a minibatch sequence $x = (x^{(1)}, \ldots, x^{(B)}) \in \mathbb{R}^{B \times L \times d}$
4:     Compute the mean $\mu \in \mathbb{R}^{L \times d}$ and variance $K \in \mathbb{R}^{L \times d}$ of $x$ along the batch dimension
5:     Compute $\tau(\theta_i)$ via convolution: $\tau(\theta_i) \leftarrow \left[|k(\theta_i)| * \sqrt{K} + |k(\theta_i) * \mu|\right]_L \in \mathbb{R}^d$
6:     Average over the feature dimension: $\tau(\theta_i) \leftarrow \text{Mean}^2(\tau(\theta_i))$
7:     Rescale by the initialization scheme (8): $\hat{C} \leftarrow C/\sqrt{\tau(\theta_i)}$
8:     Start to train with the updated initialization $(\hat{C}, A, B)$
9:   end if
10:  Regular training procedure
11: end for
**Output:** Updated model parameter $\theta_s$

### 4.3 Generalization Bound as a Regularization Method

In addition to its role as an initialization scheme, the generalization measure can also be regarded as a regularizer. In this section, we utilize the bound (3) to design a regularization method that improves the generalization performance while introducing only a small extra computational cost. For the generalization bound (3), we use the dominant term (for large $T$) $\tau(\theta)$ defined in (7) as a regularizer. Then, the new empirical risk with regularization is given by
$$\tilde{R}_n(\theta) := R_n(\theta) + \lambda \cdot \tau(\theta), \quad (9)$$
where $\lambda \geq 0$ is the regularization coefficient. When training multi-layer SSMs, we calculate the complexity $\tau(\theta)$ in (9) at each layer and add them together as the total regularization. We describe the training procedure for one-layer SSMs in Algorithm 2, where the notation follows Algorithm 1.
**Algorithm 2** Training one-layer SSMs with the regularization method (9)

**Input:** Training sequences $x_1, \ldots, x_n \in \mathbb{R}^{L \times d}$ with length $L$ and dimension $d$, an SSM initialization $\theta_0$, an SSM kernel function $k(\theta) \in \mathbb{R}^{L \times d}$, loss function $\tilde{R}(\cdot, \cdot) : \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$, regularization coefficient $\lambda$, optimizer $\text{OPT}$, number of epochs $s$
1: for $i = 0$ to $s - 1$ do
2:   Sample a minibatch input $x = (x^{(1)}, \ldots, x^{(B)}) \in \mathbb{R}^{B \times L \times d}$ with labels $(y^{(1)}, \ldots, y^{(B)})$
3:   Calculate the mean $\mu \in \mathbb{R}^{L \times d}$ and variance $K \in \mathbb{R}^{L \times d}$ of $x$ along the batch dimension
4:   Compute the SSM output via convolution: $y \leftarrow [k(\theta_i) * x]_L \in \mathbb{R}^{B \times d}$
5:   Compute the regularization via convolution: $\tau(\theta_i) \leftarrow \left[|k(\theta_i)| * \sqrt{K} + |k(\theta_i) * \mu|\right]_L \in \mathbb{R}^d$
6:   Average over the feature dimension: $\tau(\theta_i) \leftarrow \text{Mean}^2(\tau(\theta_i))$
7:   Compute the total loss $L \leftarrow \frac{1}{B} \sum_{b=1}^{B} \tilde{R}(y_b, y^{(b)}) + \lambda \cdot \tau(\theta_i)$
8:   Parameter update: $\theta_{i+1} \leftarrow \text{OPT}(\theta_i, L)$
9: end for
**Output:** Updated model parameter $\theta_s$

**Computational cost analysis.** From the training procedure in Algorithm 2, we can see that the newly introduced training complexity mainly comes from the convolution between the SSM kernel and the sequence statistics ($\mu, K$). Since the convolution can be computed with the fast Fourier transform (Gu et al., 2022a) in $O(BdL \log L)$, the complexity of Algorithm 2 becomes $O((B + 2)dL \log L)$, which is acceptable in practical training.

Figure 2: Effects of the initialization scheme (8) on the model output, the gradient norm and the optimization under different temporal dependencies. (Left) The output $\mathbb{E}_x[|y_L|]$ at initialization w.r.t. the Gaussian white noise sequence $(x_1, \ldots, x_L)$ for length $L$ from 1 to 1000; (Middle) The gradient norm $|\nabla R_n(\theta)|$ at initialization w.r.t. the mean squared error (MSE) for varied sequence length; (Right) The training MSE curve for the Gaussian white noise with length $L = 1000$.

Table 1: Training and test loss on the Gaussian white noise sequences with different coefficients $b$ after convergence. By adding the initialization scheme (8), SSMs achieve better optimization performance and are more robust in the final training loss value across different temporal dependencies. By adding the regularization term (9), SSMs achieve better generalization performance.

4.4 Experiments

This section contains experiments to demonstrate the effectiveness of the proposed initialization scheme (8) and the regularization method (9). We use a synthetic sequence dataset and the Long Range Arena (LRA) benchmark (Tay et al., 2021) for numerical validation. To simplify the notation, we use w/o (8, 9), w (8), w (9) and w (8, 9) to represent the original base model, the model trained with rescaling, the model trained with regularization, and the model trained with both methods, respectively.

A synthetic dataset. We consider a synthetic sequence dataset generated by a centered Gaussian white noise with the covariance function $K(s, t) = \frac{1}{|b|\sqrt{\pi}} e^{-((s-t)/b)^2}$, which is a stationary Gaussian process and satisfies Assumption 1 (see Appendix C).
Then we can obtain different temporal dependencies by varying the coefficient $b$, i.e., as the magnitude of $b$ decreases, the temporal dependence of the corresponding Gaussian white noise decreases as well. In particular, as $b \to 0$, $\frac{1}{|b|\sqrt{\pi}} e^{-(x/b)^2}$ becomes a delta function $\delta(x)$, entailing zero temporal dependence for the sequence data. In the following experiment, we generate the sequence data by the Gaussian white noise with $b \in \{1, 0.1, 0.01\}$. For each input sequence $(x_1, \ldots, x_L)$, its corresponding label is obtained by $\sin(x_{L/2})$, i.e., the sine value of the time-lagged input. We use the unidirectional S4-Legs model (Gu et al., 2022a) (which only contains the convolution layer) to train on the sequence data. More details about the experiment setup are provided in Appendix A.1.

In Figure 2, we plot the model output $\mathbb{E}_x[|y_L|]$ and the gradient norm $|\nabla R_n(\theta)|$ at initialization, and the training loss (w (8)) under different temporal patterns obtained by varying the Gaussian white noise parameter $b$. We see that the initialization scheme (8) enhances the robustness of the output scale (matching Proposition 1), the gradient norm at initialization, and the training loss value across different temporal structures. By comparing the final training loss with and without (8) in Table 1 (w/o (8, 9) vs w (8) and w (9) vs w (8, 9)), we see that adding the rescaling (8) also improves the training performance and makes the final training error more robust across different temporal dependencies (by varying $b$). For the regularization method (9), we compare the final test loss with and without (9) in Table 1 (w/o (8, 9) vs w (9) and w (8) vs w (8, 9)). We can see that our regularization method improves the generalization performance. Moreover, combining (8) and (9), the model gets the best test performance across various temporal structures of the sequence data.
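The sketch below illustrates, under stated assumptions, how such synthetic sequences can be generated and how the complexity measure \( \tau(\theta) \) from (7) can be estimated from batch statistics in the spirit of Algorithm 1. The SSM kernel, sequence length, and coefficient \( b \) are illustrative stand-ins rather than the paper's exact configuration, and the feature dimension is one, so the feature-wise averaging step is trivial.

```python
import numpy as np

rng = np.random.default_rng(0)
L, batch, b = 256, 64, 0.1
t = np.linspace(0.0, 1.0, L)

# Stationary Gaussian process with covariance K(s, t) = exp(-((s - t)/b)^2) / (|b| sqrt(pi)).
S, T_ = np.meshgrid(t, t)
cov = np.exp(-((S - T_) / b) ** 2) / (abs(b) * np.sqrt(np.pi))
Lchol = np.linalg.cholesky(cov + 1e-6 * np.eye(L))       # jitter for numerical stability
x = (Lchol @ rng.standard_normal((L, batch))).T           # (batch, L) input sequences
y = np.sin(x[:, L // 2])                                   # labels sin(x_{L/2})

# Batch statistics at every length position.
mu = x.mean(axis=0)                                        # (L,)
K_diag = x.var(axis=0)                                     # (L,)

# A hypothetical exponentially decaying SSM kernel (stand-in for k(theta)).
a, B, C, dt = -2.0, 1.0, 1.0, t[1] - t[0]
k = C * np.exp(a * dt * np.arange(L)) * B

def conv_last(u, v):
    """Last entry of the causal convolution of u with v (the [.]_L operation)."""
    return np.sum(u[::-1] * v)

# tau(theta) as in Algorithm 1, followed by the rescaling of C from (8).
tau = (conv_last(np.abs(k), np.sqrt(K_diag)) + abs(conv_last(k, mu))) ** 2
C_rescaled = C / np.sqrt(tau)
print(tau, C_rescaled)
```

Re-running this with smaller \( b \) (weaker temporal dependence) changes \( \tau(\theta) \) and hence the rescaling factor, which is the mechanism that keeps the initial output scale comparable across temporal patterns; the same \( \tau(\theta) \) term is what Algorithm 2 adds to the loss as \( \lambda \cdot \tau(\theta) \).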
| | ListOps | Text | Retrieval | Image | Pathfinder | PathX |
|------------------|---------|------|-----------|-------|------------|-------|
| **unidirectional S4-Legs** | | | | | | |
| w/o (8, 9) | 59.45 | 79.27 | 88.28 | 87.39 | 87.84 | |
| w (8) | 60.30 | 81.44 | 89.38 | 88.11 | 87.95 | |
| w (9) | **60.65** | **81.45** | **89.21** | **87.79** | **90.36** | |
| w (8, 9) | 60.40 | **82.56** | **90.13** | **88.28** | 90.03 | |
| Time / epoch, w/o (9) | 2min 57s | 2min 4s | 2min 5min | 2min 5min | 2min 2min | 2min 2min |
| Time / epoch, w (9) | 2min 35s | 1min 45s | 2min 15s | 2min 15s | 2min 15s | 2min 15s |
| **bidirectional S4-Legs** | | | | | | |
| w/o (8, 9) | 62.45 | 89.09 | 91.21 | 89.32 | 95.75 | 89.17±1.71 |
| w (8) | 61.90 | 88.90 | 91.44 | 89.52 | 95.43 | 89.67±0.18 |
| w (9) | 61.09 | **89.27** | **91.32** | **89.93** | **95.80** | **90.21±0.16** |
| w (8, 9) | 61.79 | 89.19 | **91.46** | **89.80** | **95.86** | **89.85±0.72** |
| Time / epoch, w/o (9) | 2min 46s | 3min 06s | 18min 20s | 2min 12s | 4min 06s | 4min 06s |
| Time / epoch, w (9) | 2min 18s | 3min 34s | 20min 30s | 2min 46s | 4min 20s | 4min 40s |
| **bidirectional S4D-Legs** | | | | | | |
| w/o (8, 9) | 57.80 | 83.91 | 90.84 | 86.47 | 87.26 | 90.19±0.78 |
| w (8) | 57.25 | 84.79 | 91.01 | 86.34 | 88.35 | 90.25±0.15 |
| w (9) | 57.50 | 84.52 | **91.08** | **87.33** | **87.21** | **90.19±0.34** |
| w (8, 9) | **58.45** | **85.75** | **91.03** | **87.28** | **88.36** | **89.40±1.21** |
| Time / epoch, w/o (9) | 2min 19s | 2min 19s | 18min 15s | 1min 50s | 1min 50s | 1min 50s |
| Time / epoch, w (9) | 2min | 2min 35s | 22min 36s | 1min 50s | 1min 50s | 1min 11s |

Table 2: Test accuracy and running time (per epoch on an A100 GPU) on the LRA benchmark under different settings for different models. The unidirectional model processes a sequence in one direction, while the bidirectional model consists of two separate layers that process the sequence in opposite directions. Mean and standard error for the PathX results are reported based on 3 independent runs.

**LRA benchmark.** We investigate the effects of the initialization scheme (8) and the regularization method (9) on the LRA benchmark. We consider three base models: unidirectional S4-Legs (Gu et al., 2022a), bidirectional S4-Legs (Goel et al., 2022) and bidirectional S4D-Legs (Gu et al., 2022b). Among these three models, the unidirectional S4-Legs is the one closest to our model setting (1); however, it performs poorly on challenging datasets. Thus, we do not use the unidirectional S4-Legs to train on PathX. We follow the training rules described by Gu et al. (2023), but with adjustments to the model size. For example, the model sizes used to train on PathX for both S4-Legs and S4D-Legs are relatively small compared with the ones used in Gu et al. (2023), to save training time. More details on the dataset description and the experiment setup are given in Appendix A.2. By comparing the test accuracy for w/o (8, 9) vs w (9) and w (8) vs w (8, 9) in Table 2, we see that adding the regularization (9) enhances the generalization performance in most cases for all three models. In particular, when combining the initialization scheme (8) and the regularization (9), one gets the best test performance in half of the tasks, indicating that our proposed optimization designs effectively improve the generalization performance. We also compare the running time with and without the proposed optimization designs. Since (8) is applied before training and does not introduce additional training complexity, we report the running time for w/o (8, 9) and w (9) in Table 2.
The results show that the regularization brings a little extra computational cost, matching the computational cost analysis in Section 4.3. We include an ablation study for the hyperparameter λ and add more experiment results in Appendix A.2. **5 DISCUSSION** In this work, we study the optimization and the generalization for SSMs. Specifically, we give a data-dependent generalization bound, revealing an effect of the temporal dependencies of the sequence data on the generalization. Based on the bound, we design two algorithms: a new initialization scheme and a regularization method, to improve the optimization and generalization for SSMs. There are still some gaps between the theory and the methodologies in this paper. The first one is that the skip connection matrix $D$ is omitted in our defined model (1). This will not affect our generalization bound because we may express the explicit solution for (1) as $y(t) = \int_0^t (\rho_\theta(s) + D\delta(s))x(t-s)ds$ where $\delta(\cdot)$ is a delta function, which is still a convolution model with a new kernel $\rho_\theta(s)+D\delta(s)$. However, the initialization scheme (8) only adjusts $C$ and requires the kernel function to be linear in $C$. Hence, (8) may not work well when $Dx(t)$ is much larger than $\int_0^t \rho_\theta(s)x(t-s)ds$. The second gap is that our theory is for single-layer linear SSMs. When nonlinearities are added, our generalization bound still works for single-layer SSMs if the nonlinearity does not affect the Hölder condition and the sub-Gaussian property (Assumption 1). For Lipschitz (also Hölder continuous) functions, there are some known examples (see Appendix G) where the sub-Gaussian condition remains after the nonlinearity. The extension of our theory to the multi-layer case is an interesting direction, which we leave for future work. Reproducibility The generalization bound (2) for linear regression is proved in Appendix B. The proof for Theorem 1 is provided in Appendix E. The derivations for (5) and (6) in Section 4.1 are given in Appendix D. The proof for Proposition 1 is in Appendix F. The details for the experiment settings are shown in Appendix A.1 and Appendix A.2. REFERENCES Zeyuan Allen-Zhu and Yuanzhi Li. Can sgd learn recurrent neural networks with provable generalization? Advances in Neural Information Processing Systems, 32, 2019. Ehsan Azmoodeh, Tommi Sottinen, Lauri Viitasaari, and Adil Yazigi. Necessary and sufficient conditions for hölder continuity of gaussian processes. Statistics & Probability Letters, 94:230–235, 2014. Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002. Leonard E Baum and Ted Petrie. Statistical inference for probabilistic functions of finite state markov chains. The annals of mathematical statistics, 37(6):1554–1563, 1966. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166, 1994. S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013. Minshuo Chen, Xingguo Li, and Tuo Zhao. On generalization bounds of a family of recurrent neural networks. arXiv preprint arXiv:1910.12947, 2019. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014. 
Bhaskar Dasgupta and Eduardo Sontag. Sample complexity for learning recurrent perceptron mappings. Advances in Neural Information Processing Systems, 8, 1995. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, June 2019. Daniel Y Fu, Tri Dao, Khaled Kamal Saab, Armin W Thomas, Atri Rudra, and Christopher Re. Hungry hungry hippos: Towards language modeling with state space models. In The Eleventh International Conference on Learning Representations, 2023. Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. It’s raw! audio generation with state-space models. In International Conference on Machine Learning, pp. 7616–7633. PMLR, 2022. Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2022a. Albert Gu, Ankit Gupta, Karan Goel, and Christopher Ré. On the parameterization and initialization of diagonal state space models. Advances in Neural Information Processing Systems, 35, 2022b. Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, and Christopher Re. How to train your HIPPO: State space models with generalized orthogonal basis projections. In International Conference on Learning Representations, 2023. Ankit Gupta, Albert Gu, and Jonathan Berant. Diagonal state spaces are as effective as structured state spaces. In Advances in Neural Information Processing Systems, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
9zpOUsOvLM
6) Page 5, in equation (6), note that "D_1" is defined to be "ph()", but in page 4, "ph()" is defined to be collections of "D_0", "D_1", etc. Further, $A^l$ is defined as the Hadamard product of $A^l$ and $D_1[1] - D_1[0]$. Here $A^l$ is a matrix and $D_1[1] - D_1[0]$ is a vector. They may have very different dimensions.
ALIGNING PERSISTENT HOMOLOGY WITH GRAPH POOLING Anonymous authors Paper under double-blind review ABSTRACT Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich expressive power. However, naively plugging PH features into GNN layers always results in marginal improvement with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that filtration operation in PH naturally aligns graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph is performed along persistent sub-topology, leading to improved performance. Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gain over several popular datasets, demonstrating its wide applicability and flexibility. Code is open-sourced at https://anonymous.4open.science/r/TIP. 1 INTRODUCTION Persistent homology (PH) is a powerful tool in the field of topological data analysis, which is capable of evaluating stable topological invariant properties from unstructured data in a multi-resolution fashion (Edelsbrunner & Harer, 2022). Concretely, PH derives an increasing sequence of simplicial complex subsets by applying a filtration function (see Fig. 1(a)). According to the fact that PH is at least as expressive as Weisfeiler-Lehman (WL) hierarchy (Horn et al., 2021), there recently emerged a series of works seeking to merge PH into graph neural networks (GNNs), delivering competitive performance on specific tasks (Wong & Vong, 2021; Zhao et al., 2020; Horn et al., 2021). Standard schemes of existing works achieve this by employing pre-calculated topological features (Zhao et al., 2020) or placing learnable filtration functions in the neural architectures (Hofer et al., 2020; Horn et al., 2021). Such integration of PH features is claimed to enable GNNs to emphasize persistent topological sub-structures. However, it is still unclear to what extent the feature-level integration of PH is appropriate and how to empower GNNs with PH other than utilizing features. Graph pooling (GP) in parallel plays an important role in a series of graph learning methods (Grattarola et al., 2022), which hierarchically aggregates an upper-level graph into a more compact lower-level graph. Typically, GP relies on calculating an assignment matrix taking into account local structural properties such as community (Müller, 2023) and cuts (Bianchi et al., 2020). Though the pooling paradigm in convolutional neural networks (CNNs) is quite successful (Krizhevsky et al., 2012), some researchers raise concerns about its effectiveness and applicability in graphs. For example, Mesquita et al. (2020) challenges the local-preserving usage of GP by demonstrating that random pooling even leads to similar performance. Till now, it remains opaque what property should be preserved for pooled topology to better facilitate the downstream tasks. From Fig. 1(a), it is readily observed that PH and GP both seek to coarsen/sparsify a given graph in a hierarchical fashion: while PH gradually derives persistent sub-topology (substructures that have meaningful topology) by adjusting the filtering parameter, GP obtains a sub-graph by performing a more aggressive cut-off. In a sense of understanding a graph through a hierarchical lens, PH and GP turn out to align with each other well. 
Driven by this observation, in this paper, we investigate the mechanism of aligning PH and GP so as to mutually reinforce each other. To this end, we conduct experiments by running a pioneer GP method DiffPool (Ying et al., 2018) to conduct graph classification on several datasets and at the same time use the technique in Hofer et al. (2020) to compute PH information. We manually change pooling ratio and see what proportion of meaningful topological information (characterized by the ratio of non-zero persistence) is naturally preserved at the final training stage. Surprisingly, the correspondence is quite stable regardless of different datasets (see Fig. 1(b)), which implies the monotone trend between the pooling ratio and non-zero persistence is commonly shared by a large range of graph data. As a consequence, we develop a natural way to integrate PH and GP in both feature and topology levels. Concretely, in addition to concatenating vectorized PH diagram as supplementary features, we further enforce the coarsened graph to preserve topological information as much as possible with a specially designed PH-inspired loss function. Hence we term our method Topology-Invariant Pooling (TIP). TIP can be flexibly injected into a variety of existing GP methods, and demonstrates a consistent ability to provide substantial improvement over them. We summarize our contributions as follows: - We for the first time investigate the way of aligning PH with GP, by investigating the monotone relationship in between. - We further design an effective mechanism to inject PH information into GP at both feature and topology levels, with a novel topology-preserving loss function. - Our mechanism can be flexibly integrated with a variety of GP methods, achieving consistent and substantial improvement over multiple datasets. 2 RELATED WORK Graph pooling. Graph pooling has been used in various applications, which can reduce the graph size while preserving its structural information. Early methods are based on clustering to coarsen graphs, such as the greedy clustering method Graclus (Dhillon et al., 2007), non-negative matrix factorization of the adjacency matrix (Bacciu & Di Sotto, 2019), and spectral clustering (Ma et al., 2019). Recently, learnable graph pooling methods have gained popularity, which learn to select important nodes in an end-to-end manner. DiffPool (Ying et al., 2018) follows a hierarchical learning structure by utilizing GNNs to learn clusters and gradually aggregate nodes into a coarser graph. MinCutPool (Bianchi et al., 2020) optimizes a normalized cut objective to partition graphs into clusters. DMoNPool (Müller, 2023) optimizes the modularity of graphs to ensure high-quality clusters. SEP (Wu et al., 2022) generates clusters in different hierarchies simultaneously without compromising local structures. These methods are classified as dense pooling due to the space complexity they incur. Despite their effectiveness, dense pooling methods have been criticized for high memory cost and complexity (Cangea et al., 2018). Therefore, various sparse pooling methods have been proposed, such as Top-K (Gao & Ji, 2019), ASAPool (Ranjan et al., 2020), and SAGPool (Lee et al., 2019). These methods coarsen graphs by selecting a subset of nodes based on a ranking score. As they drop some nodes in the pooling process, these methods are criticized for their limited capacity to retain essential information, with potential effects on the expressiveness of preceding GNN layers (Bianchi & Lachi, 2023). 
Persistent homology in GNNs. PH is a technique to calculate topological features of structured data, and many approaches have been proposed to use PH in graph machine learning due to the high expressiveness of topological features on graphs (Hofer et al., 2017). Since isomorphic graphs may exhibit different topological features, the combination of PH and the Weisfeiler-Lehman (WL) algorithm leads to stronger expressive power (Rieck et al., 2019). This encourages further exploration on equipping GNNs with topological features. Zhao et al. (2020) propose that message passing in GNNs can be effectively reweighted using topological features. Hofer et al. (2020) and Horn et al. (2021) provide theoretical and practical insights that filtrations in PH can be purely learnable, enabling flexible usage of topological features in GNNs. However, existing methods tend to view PH merely as a tool for providing supplementary information to GNNs, resulting in only marginal improvements and limited interpretability. 3 BACKGROUND We briefly review the background of this topic in this section, as well as elaborate on the notations. Let $G = (V, E)$ be an undirected graph with $n$ nodes and $m$ edges, where $V$ and $E$ are the node and the edge sets, respectively. Nodes in attributed graphs are associated with features, and we denote by $V = \{(v, x_v)\}_{v \in V}$ the set of nodes $v$ with $d$ dimensional attribute $x_v$. It is also practical to represent the graph with an adjacency matrix $A \in \{0, 1\}^{n \times n}$ and the node feature matrix $X \in \mathbb{R}^{n \times d}$. Graph Neural Networks. We focus on the general message-passing GNN framework that updates node representations by iteratively aggregating information from neighbors (Gilmer et al., 2017). Concretely, the $k$-th layer of such GNNs can be expressed as: $$X^{(k)} = M(A, X^{(k-1)}; \theta^{(k)})$$ where $\theta^{(k)}$ is the trainable parameter, and $M$ is the message propagation function. Numbers of $M$ have been proposed in previous research (Kipf & Welling, 2016; Hamilton et al., 2017). A complete GNN is typically instantiated by stacking multiple layers of Eq. 1. Hereafter we denote by GNN($\cdot$) an arbitrary such multi-layer GNN for brevity. Dense Graph Pooling. GP in GNNs is a special layer designated to produce a coarsened or sparsified sub-graph. Formally, GP can be formulated as $G \mapsto G_P = (V_P, E_P)$ such that the number of nodes $|V_P| \leq n$. GP layers can be placed into GNNs in a hierarchical fashion to persistently coarsen the graph. Typical GP approaches (Ying et al., 2018; Bianchi et al., 2020; Müller, 2023) rely on learning a soft cluster assignment matrix $S^{(l)} \in \mathbb{R}^{n_{l-1} \times n_l}$: $$S^{(l)} = \text{softmax}\left(\text{GNN}^{(l)}(A^{(l-1)}, X^{(l-1)})\right).$$ Subsequently, the coarsened adjacency matrix at the $l$-th pooling layer is calculated as $$A^{(l)} = S^{(l)T}A^{(l-1)}S^{(l)}$$ and the corresponding node representations are calculated as $$X^{(l)} = S^{(l)T}\text{GNN}^{(l)}(A^{(l-1)}, X^{(l-1)}).$$ These approaches differ from each other in the way to produce $S$, which is used to inject a bias in the formation of clusters. In our work, we select three GP methods, i.e., DiffPool (Ying et al., 2018), MinCutPool (Bianchi et al., 2020), and DMoNPool (Müller, 2023), to cope with. Details of these methods are in Appendix A. Topological Features of Graphs. Simplicial complexes stand as the focal point within the realm of algebraic topology. 
An assembly of simplices of certain dimensions constitutes a simplicial complex, denoted as $K$. A graph can be seen as a low-dimensional simplicial complex that only contains 0-simplices (vertices) and 1-simplices (edges) (Horn et al., 2021). The simplest kind of topological features describing graphs are Betti numbers, formally denoted as $\beta_0$ for the number of connected components and $\beta_1$ for the number of cycles. Although these two numbers alone have limited expressive power, it can be improved by evaluating them alongside a filtration. Filtrations are scalar-valued functions of the form \( f : V \cup E \rightarrow \mathbb{R} \), which assign each vertex and edge a value. Changes in the Betti numbers, known as persistent Betti numbers, can then be monitored along the filtration: for a threshold \( a \in \mathbb{R} \), we analyze the subgraph originating from the pre-image of \( (-\infty, a] \) under \( f \), denoted as \( f^{-1}((-\infty, a]) \). The image of \( f \) leads to a set of node values \( a_1 < \cdots < a_n \) and generates a sequence of nested subgraphs of the form \( \emptyset \subseteq G_0 \subseteq \ldots \subseteq G_k \subseteq \ldots \subseteq G_n = G \), where \( G_k = (V_k, E_k) \) is a subgraph of \( G \) with \( V_k := \{ v \in V \mid f(x_v) \leq a_k \} \) and \( E_k := \{ (v, w) \in E \mid \max\{f(x_v), f(x_w)\} \leq a_k \} \). This process is also known as persistent homology (denoted as \( \text{ph}(\cdot) \)) on graphs. Typically, persistent Betti numbers are characterized in a persistence diagram (PD) as \( \text{ph}(G, f) = \{D_0, D_1, \ldots\} \), which is made up of tuples \((a_i, a_j) \in \mathbb{R}^2\), with \( a_i \) and \( a_j \) representing the creation and destruction of a topological feature, respectively (see Fig. 1(c)). The absolute difference in function values \( |a_j - a_i| \) is called the persistence of a topological feature, where high persistence corresponds to salient features of the function, while low persistence is typically regarded as noise (Horn et al., 2021; Rieck, 2023). Moreover, we follow the settings in previous works (Horn et al., 2021; Hofer et al., 2017) to extend \( D_1 \) as follows: (1) each cycle is paired with the edge that created it; (2) edges \( e \) that do not create a cycle are assigned a 'dummy' tuple value, such as \((f(e), f(e))\); (3) all other edges are paired with the maximum value of the filtration. Therefore, \( D_1 \) consists of as many tuples as the number of edges \( m \).

4 METHODOLOGY

4.1 OVERVIEW

An overview of our method is shown in Fig. 2, where the shaded part corresponds to one layer of Topology-Invariant Pooling. The upper part is the GP process and the lower part is the injection of PH. Let \((A^{(0)}, X^{(0)})\) be the input graph. We consider performing GP at the \((l-1)\)-th layer. After obtaining a coarsened (densely connected) graph \((A^{(l)}, X^{(l)})\) with a standard GP method, we resample the coarsened adjacency using the Gumbel-softmax trick to obtain \( A'^{(l)} \), in order to make it amenable to PH. Then, this coarsened graph is further reweighted by injecting persistence and optimized by minimizing the topological gap \( L_{\text{topo}} \) to the original graph, yielding \((A^{(l)}, X^{(l)})\). By stacking multiple TIP layers, hierarchical pooling emphasizing topological information can be achieved. In the following sections, we elaborate on the detailed design of our mechanism.
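To make the filtration process from Section 3 concrete before moving to the method details, the sketch below tracks the Betti numbers of the nested subgraphs induced by a vertex filtration: \( \beta_0 \) (connected components) via union-find and \( \beta_1 = |E_k| - |V_k| + \beta_0 \) (independent cycles, which holds for graphs). The toy graph and filtration values are purely illustrative.

```python
def find(parent, u):
    """Union-find root lookup with path compression."""
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

def betti_along_filtration(n, edges, f):
    """Return (threshold, beta_0, beta_1) for each subgraph G_k of the vertex filtration f."""
    history = []
    for a in sorted(set(f)):
        V_k = [v for v in range(n) if f[v] <= a]
        E_k = [(u, v) for (u, v) in edges if max(f[u], f[v]) <= a]
        parent = list(range(n))
        for (u, v) in E_k:                      # union active edges to count components
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
        beta0 = len({find(parent, v) for v in V_k})
        beta1 = len(E_k) - len(V_k) + beta0     # number of independent cycles in a graph
        history.append((a, beta0, beta1))
    return history

# Toy graph: a 4-cycle with a pendant vertex, and illustrative filtration values.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
f = [0.1, 0.2, 0.3, 0.4, 0.9]
for a, b0, b1 in betti_along_filtration(5, edges, f):
    print(f"threshold {a:.1f}: beta0 = {b0}, beta1 = {b1}")
```

In this example the cycle only appears once the threshold reaches 0.4, which is exactly the kind of birth event that the tuples in \( D_1 \) record.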
4.2 TOPOLOGY-INVARIANT POOLING

In many real-world applications, such as molecular graph analysis (Swenson et al., 2020; Hofer et al., 2020), topological features of graphs are of utmost importance. However, typical GNNs fail to capture certain topological structures in graphs, such as cycles (Bouritsas et al., 2022; You et al., 2021; Huang et al., 2022). Moreover, in dense graph pooling, graphs are pooled without preserving any topology. Even if we manage to make the GNN topology-aware, the coarsened graph has no meaningful topology at all, impairing the use of GNNs for these tasks. To overcome these limitations, we propose to inject topological information into GP.

We resort to PH to characterize the importance of edges. Note that for edges that do not form cycles, their creation and destruction values are the same, leading to zero persistence. The core of PH is the notion of filtration, and choosing the right filtration is challenging. As the coarsened graph evolves in each training step, integrating PH into GP demands multiple computations of filtrations. To address this, we use learnable filtration (LF) functions to incorporate PH information, which is flexible and efficient, as demonstrated by Hofer et al. (2020). LF relies on node features and graph topology, which are readily available in GP. Consequently, LF can be seamlessly integrated into GP with minimal computational overhead. Specifically, we employ an MLP network $\Phi(\cdot)$ as the filtration function together with sigmoid($\cdot$) to map node features $X \in \mathbb{R}^{n \times d}$ into $n$ scalar values. Recently, an increasing amount of attention has been devoted to cycles (Bouritsas et al., 2022; You et al., 2021; Huang et al., 2022) due to their significant relevance to downstream tasks in various domains such as biology (Koyutürk et al., 2004), chemistry (Murray & Rees, 2009), and social network analysis (Jiang et al., 2010). Recognizing that cycles offer an intuitive representation of graph structure, we focus on the one-dimensional PD associated with cycles. Following the standard GP formulation (Eqs. 2-4), we additionally propose the following modules to inject PH into GP at both the feature and topology levels.

Resampling. One major limitation of utilizing LF is that the computation is unaware of edge weights, i.e., all edges with non-negative weights are treated equally, so PH cannot directly extract meaningful topological features from $A^{(l)}$. Besides, rethinking GP in Eq. 3, the coarsened adjacency matrix has limited expressive power for two reasons. First, although $S^{(l)}$ is a soft assignment matrix obtained by softmax($\cdot$), every element of it is nonzero, i.e., $A^{(l)}$ is always densely connected. Second, the edge weights may span a wide range due to the multiplication (see Appendix D for empirical evidence). These drawbacks hinder the stability and generalization power of the subsequent message passing layers (Gong & Cheng, 2019). None of the existing GP methods handles these problems properly. Therefore, we resample the coarsened adjacency $A^{(l)}$ obtained from a normal GP layer (Eq. 3) as:
$$A'^{(l)} = \text{resample}\left(\frac{A^{(l)} - \min(A^{(l)})}{\max(A^{(l)}) - \min(A^{(l)})}\right) \quad (5)$$
where $A^{(l)}$ is first normalized to the range $[0, 1]$, and resample($\cdot$) is performed independently for each matrix entry using the Gumbel-softmax trick (Jang et al., 2016).
In practice, only the upper triangular part is resampled to keep the matrix symmetric, and we add self-loops to the graph.

Persistence Injection. Now $A'^{(l)} \in \{0, 1\}^{n_l \times n_l}$ is a sparse binary matrix without edge features, so we can easily inject topological information into it. For a resampled graph with $A'^{(l)}$ and $X^{(l)}$, we formulate the persistence injection as:
$$D_1 = \text{ph}(A'^{(l)}, \text{sigmoid}(\Phi(X^{(l)}))), \quad A^{(l)} = A'^{(l)} \odot \text{to\_dense}(D_1[1] - D_1[0]) \quad (6)$$
where $\odot$ is the Hadamard product, to\_dense($\cdot$) transforms the per-edge values into a dense matrix representation, $D_1[i]$ denotes the $i$-th entry of each tuple of $D_1$, and we still denote the updated adjacency matrix after persistence injection by $A^{(l)}$ for notational consistency. Persistence injection can actually be regarded as a reweighting process. Since the filtration values are within $[0, 1]$, $A^{(l)}$ after persistence injection is guaranteed to have edge weights in the range $[0, 1]$ and is passed to the next pooling layer.

Topological Loss Function. The aforementioned mechanism can explicitly inject topological information into graphs, but it relies on the condition that the coarsened graph retains certain essential sub-topology. To this end, we propose an additional loss function to guide the GP process. Intuitively, the coarsened graph should exhibit similarity to the original graph in terms of topology. Since the computation of PH is differentiable, one possible approach is to directly minimize the differences between the PDs of the original graph and the coarsened graph. However, this implementation would require computing the Wasserstein distance between two PDs through optimal transport (Yan et al., 2022), which is intractable in training due to its complexity. Since our objective is only to estimate the difference, we instead propose vectorizing the PDs and minimizing the difference of their statistical features (Okabe et al., 2018). Specifically, we use several transformations (denoted transform($\cdot$)), including the triangle point transformation, the Gaussian point transformation and the line point transformation introduced in Carrière et al. (2020), and concatenate their outputs to convert the tuples in a PD into vectors \( h_t \) (\( t \in [1, m] \)). We calculate the mean vector \( \mu \) as well as the second-order statistics, the standard deviation vector \( \sigma \), as:
\[ h_t = \text{transform}(D_1), \quad \mu = \frac{1}{m} \sum_{t=1}^{m} h_t, \quad \sigma = \sqrt{\frac{1}{m} \sum_{t=1}^{m} h_t \odot h_t - \mu \odot \mu} \quad (7) \]
In this manner, the difference between two PDs can be estimated by comparing their feature statistics, i.e., the concatenation of the mean and standard deviation vectors. To further regularize the topological difference between layers, we introduce a topological loss term defined as:
\[ L_{\text{topo}} = \frac{1}{Ld} \sum_{l=1}^{L} \sum_{i=1}^{d} \left( \left( \mu_i^{(l)} || \sigma_i^{(l)} \right) - \left( \mu_i^{(0)} || \sigma_i^{(0)} \right) \right)^2 \quad (8) \]
where \( (\cdot || \cdot) \) stands for the concatenation operation, \( L \) is the number of pooling layers, and \( d \) is the feature dimension. Note that the intuition behind \( L_{\text{topo}} \) is different from the loss functions in existing graph pooling methods: the coarsened graph after pooling should be topologically similar to the original graph rather than having exact cluster structures.
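A minimal sketch of this statistics-based comparison is given below, assuming persistence diagrams are already available as (m, 2) arrays of (birth, death) tuples. The point transformations are simplified stand-ins for the PersLay-style transforms of Carrière et al. (2020), so the exact featurization differs from the paper's; the diagrams themselves are random illustrative data.

```python
import numpy as np

def diagram_stats(diagram):
    """Vectorize each (birth, death) tuple and return concatenated mean/std statistics."""
    birth, death = diagram[:, 0], diagram[:, 1]
    persistence = death - birth
    # Simplified stand-ins for the triangle / Gaussian / line point transformations.
    h = np.stack([
        persistence,                                 # "triangle-like": height above the diagonal
        np.exp(-(birth ** 2 + death ** 2) / 2.0),    # "Gaussian-like" response centered at the origin
        0.5 * (birth + death),                       # "line-like": projection onto a fixed direction
    ], axis=1)                                       # shape (m, 3)
    mu = h.mean(axis=0)
    sigma = np.sqrt(np.clip(h.var(axis=0), 0.0, None))
    return np.concatenate([mu, sigma])

def topo_loss(coarse_diagrams, original_diagram):
    """Mean squared gap between the statistics of each coarsened PD and the original PD."""
    target = diagram_stats(original_diagram)
    gaps = [np.mean((diagram_stats(d) - target) ** 2) for d in coarse_diagrams]
    return float(np.mean(gaps))

# Illustrative diagrams with values in [0, 1], as produced by a sigmoid filtration.
rng = np.random.default_rng(0)
original = np.sort(rng.uniform(0, 1, size=(12, 2)), axis=1)
coarse = [np.sort(rng.uniform(0, 1, size=(6, 2)), axis=1) for _ in range(2)]
print(topo_loss(coarse, original))
```

Matching fixed-length statistics rather than the diagrams themselves is what avoids the optimal-transport matching that a Wasserstein loss would require, and the comparison still works when the original and coarsened graphs have different numbers of edges.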
### 4.3 Analysis

In this section, we examine the validity of our proposed method and, in particular, analyze its expressive power and complexity.

**Theorem 1** The 1-dimensional topological features computed by persistent homology are sufficient to be at least as expressive as 1-WL in terms of distinguishing non-isomorphic graphs with self-loops, i.e., if the 1-WL label sequences for two graphs \( G \) and \( G' \) diverge, there exists an injective filtration \( f \) such that the corresponding 1-dimensional persistence diagrams \( D_1 \) and \( D'_1 \) are not equal.

This result demonstrates that the 1-dimensional topological features contain sufficient information to potentially perform at least as well as 1-WL when it comes to distinguishing non-isomorphic graphs. We can then obtain the concluding remark that TIP is more expressive than other dense pooling methods by showing that there are pairs of graphs that cannot be distinguished by 1-WL but that can be distinguished by TIP.

**Proposition 1.** TIP is isomorphism-invariant.

Detailed proofs and illustrations of the theorem and proposition can be found in Appendix C.

**Complexity.** Persistent homology can be efficiently computed for dimensions 0 and 1, with a worst-case time complexity of \( O(m \alpha(m)) \), where \( m \) represents the number of edges in a graph. Here, \( \alpha(\cdot) \) is the inverse Ackermann function, which is extremely slow-growing and can essentially be considered a constant for practical purposes. Therefore, the primary factor that affects the calculation of PH is the complexity of sorting all the edges, which is \( O(m \log m) \). Our resampling and persistence injection mechanism ensures that the coarsened graphs are sparse rather than dense, making our approach both efficient and scalable.

### 5 Experiments

In the experiments, we evaluate the benefits of persistent homology on several state-of-the-art graph pooling methods, with the goal of answering the following questions: **Q1.** Is PH capable of preserving topological information during pooling? **Q2.** How does PH affect graph pooling in preserving task-specific information? To this end, we showcase the empirical performance of TIP on two tasks, namely, topological similarity (Section 5.2) and graph classification (Section 5.3). Our primary focus is to assess in which scenarios topology can enhance GP. We further conduct an ablation study to investigate the contributions of different modules, which is shown in Appendix E.2.

5.1 Experimental Setup

Models. To investigate the effectiveness of PH in GP, we integrate TIP with DiffPool, MinCutPool, and DMoNPool, which are the pioneering approaches that have inspired many other pooling methods. Additionally, as most pooling methods rely on GNNs as their backbone, we compare against the widely used GNN models GCN (Kipf & Welling, 2016), GIN (Xu et al., 2018), and GraphSage (Hamilton et al., 2017). We also consider two other related state-of-the-art GNN models, namely TOGL (Horn et al., 2021) and GSN (Bouritsas et al., 2022), which incorporate topological information and graph substructures into GNNs to enhance expressive power. Furthermore, we compare several other GP methods, namely Graclus (Dhillon et al., 2007) and TopK (Gao & Ji, 2019). For model selection, we follow the guidelines provided by the original authors or benchmarking papers.
Our method acts as an additional plug-in to existing pooling methods (referred to as -TIP) without modifying the remaining model structure and hyperparameters. Appendix B.1 provides detailed configurations of these models.

Datasets. To evaluate the capabilities of our model across diverse domains, we assess its performance on a variety of graph datasets commonly used in graph classification tasks. We select seven benchmarks from the TU datasets (Morris et al., 2020) and one benchmark from the OGB datasets (Hu et al., 2020). Specifically, we adopt the molecular datasets NCI1, NCI109, and OGBG-MOLHIV, the bioinformatics datasets ENZYMES, PROTEINS, and DD, as well as the social network datasets IMDB-BINARY and IMDB-MULTI. We use the default dataset settings (i.e., the number and type of features) from the PyG library [1]. Furthermore, to investigate the topology-preserving ability of our method, we conduct experiments on several highly structured datasets (ring, torus, grid2d) obtained from the PyGSP library [2]. Appendix B.2 provides detailed statistics of the datasets.

5.2 Preserving Topological Structure

In this experiment, we study Q1, i.e., the ability of PH to preserve topological structure during pooling. Specifically, we assess the topological similarity between the original and coarsened graphs \( \mathcal{G} \) and \( \mathcal{G}' \) by comparing the Wasserstein distance associated with their respective PDs \( \mathcal{D}_1 \) and \( \mathcal{D}'_1 \). This evaluation criterion is widely used to compare graphs of different sizes (Wong & Vong, 2021; Yan et al., 2022). To calculate the PDs, we use the Forman curvature of each edge of the graph as the filtration, which incorporates edge weights and graph clusters to better capture the topological features of the coarsened graphs (Sreejith et al., 2016; Wee & Xia, 2021). Specifically, we consider the 1-Wasserstein distance \( W(\mathcal{D}_1, \mathcal{D}'_1) = \inf_{\gamma \in \Pi(\mathcal{D}_1, \mathcal{D}'_1)} \mathbb{E}_{(x,y) \sim \gamma} \| x - y \| \) as the evaluation metric, where \( \Pi(\cdot) \) is the set of joint distributions \( \gamma(x, y) \) whose marginals are \( \mathcal{D}_1 \) and \( \mathcal{D}'_1 \), respectively. Note that this evaluation filtration is kept fixed rather than learned; during training, we still use learnable filtrations to enhance flexibility and solely optimize \( L_{topo} \) as the objective.

We compare TIP with other pooling methods. Table 1 reports the average \( W \) values on the three datasets, demonstrating that TIP improves dense pooling methods by a large margin and achieves the best topological similarity. We visualize the pooling results in Fig. 3 for better interpretation, where isolated nodes with no links are omitted for clarity. It is evident that DiffPool, MinCutPool, and DMoNPool tend to generate dense graphs and fail to preserve any topological structures. Conversely, our method, which incorporates topological features using PH, sparsifies the coarsened graphs and reveals certain essential topological structures. Notably, in the ring and torus datasets, large cycles are clearly preserved by our method. It is also interesting to observe that the grid2d dataset, despite having a different spatial layout, exhibits similar topology to torus (with four adjacent nodes forming a small cycle), resulting in similar shapes of their corresponding coarsened graphs. This indicates that the objective function indeed contributes to preserving topological similarity to some extent. Sparse pooling methods, which tend to preserve local topology, perform slightly better than the original dense pooling methods.
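As an illustration of the evaluation filtration used in this experiment, the sketch below computes a simplified, combinatorial Forman curvature for each edge of an unweighted graph, \( F(u, v) = 4 - \deg(u) - \deg(v) \), and sorts edges by curvature to define the filtration order; the weighted, cluster-aware variant of Sreejith et al. (2016) used in the paper refines this, and the example graph is purely illustrative.

```python
import networkx as nx

def forman_curvature(G):
    """Simplified Forman curvature of each edge in an unweighted graph (no higher-order cells)."""
    return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

# Illustrative graph: a ring with one chord.
G = nx.cycle_graph(8)
G.add_edge(0, 4)

curv = forman_curvature(G)

# Use the curvature values as an edge filtration: sweep edges in increasing order of curvature.
for (u, v), c in sorted(curv.items(), key=lambda kv: kv[1]):
    print(f"edge ({u}, {v}): Forman curvature {c}")
```

Edges incident to high-degree nodes receive lower (more negative) curvature and therefore enter the filtration first, which is one reason curvature-based filtrations are sensitive to where cycles and hubs sit in the coarsened graph.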
[1] https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.datasets.TUDataset.html
[2] https://pygsp.readthedocs.io/en/stable/reference/graphs.html

Table 1: Results to show the topology-preserving ability of different pooling methods. We show the Wasserstein distance (the smaller, the better) to assess the topological similarity. A **bold** value indicates the overall winner.

| Methods | ring | torus | grid2d |
|-----------------|---------------|--------------|---------------|
| Graclus | 37.62 ± 4.41 | 124.47 ± 12.07 | 35.82 ± 0.93 |
| TopK | 14.24 ± 1.06 | 35.15 ± 4.78 | 84.12 ± 2.21 |
| DiffPool | 234.57 ± 9.49 | 237.89 ± 20.66 | 146.91 ± 6.05 |
| DiffPool-TIP | **8.03 ± 3.08** | **17.97 ± 2.19** | **32.26 ± 3.21** |
| MinCutPool | 232.60 ± 10.81 | 248.51 ± 15.69 | 155.16 ± 21.79 |
| MinCutPool-TIP | 18.11 ± 5.59 | **11.38 ± 2.21** | 58.71 ± 9.84 |
| DMoNPool | 224.48 ± 22.25 | 236.97 ± 16.54 | 142.85 ± 27.53 |
| DMoNPool-TIP | 16.10 ± 4.80 | 17.34 ± 4.76 | 52.26 ± 5.75 |

Figure 3: Coarsened graphs from different methods in the preserving topological structure experiment.

### 5.3 Graph Classification

In this experiment, we examine the impact of PH on GP in downstream tasks to answer Q2. We have observed in the former experiment that PH can preserve essential topological information during pooling. However, two additional concerns arise: (1) Does TIP continue to generate invariant sub-topology in the downstream task? (2) If so, does this sub-topology contribute to the performance of the downstream task? To address these concerns, we evaluate TIP using various graph classification benchmarks, where the accuracy achieved on these benchmarks serves as a measure of a method's ability to selectively preserve crucial information based on the task at hand.

We begin by visualizing the coarsened graphs in this task, where edges are cut off at a small threshold value. From Fig. 4, we can clearly observe that our method manages to preserve essential sub-topology similar to that of the original graphs, while dense pooling methods cannot preserve any topology. As discussed in Mesquita et al. (2020), dense pooling methods, such as DiffPool, achieve comparable performance when the assignment matrix $S$ is replaced by a random matrix. Here our visualization reveals that regardless of the value of $S$, the coarsened graph always approaches a fully connected one. Sparse pooling methods, on the other hand, manage to preserve some local structures through clustering or dropping, but the essential global topological structures are destroyed.

Table 2 presents the average and standard deviation of the graph classification accuracy on benchmark datasets. Additionally, the results of several baseline GNNs are provided. Experimental results demonstrate that TIP can consistently enhance the performance of the three dense pooling methods. While the original dense pooling methods sometimes underperform compared to the baselines, they are able to surpass them after integrating TIP. Among the different variants of dense pooling methods,

Table 2: Test accuracy of graph classification on benchmark datasets. A **bold** value indicates the overall winner. Gray background indicates that TIP outperforms the base GP.
| Methods | NCI1 | NCI109 | ENZYMES | PROTEINS | DD | IMDB-BINARY | IMDB-MULTI | OGBG-MOLHIV | |------------------|------|--------|---------|----------|-------|-------------|------------|-------------| | GCN | 77.81 ± 1.59 | 74.90 ± 1.85 | 32.51 ± 3.35 | 76.65 ± 3.14 | 78.66 ± 2.36 | 74.20 ± 2.49 | 53.23 ± 3.04 | 75.04 ± 0.84 | | GIN | 80.30 ± 1.70 | 79.66 ± 1.57 | 42.83 ± 3.66 | 77.18 ± 3.35 | 78.05 ± 3.60 | 72.65 ± 3.04 | 53.26 ± 3.16 | 76.03 ± 0.84 | | GraphSage | 80.85 ± 1.25 | 79.16 ± 1.28 | 39.17 ± 3.28 | 76.67 ± 3.05 | 78.83 ± 3.07 | 76.60 ± 2.37 | 53.46 ± 2.39 | 76.18 ± 1.27 | | TOGL | 80.53 ± 2.29 | 78.27 ± 1.39 | 46.09 ± 3.72 | 78.17 ± 2.80 | 76.10 ± 2.24 | 76.65 ± 2.75 | 53.87 ± 2.67 | 77.21 ± 1.33 | | GSN | 83.50 ± 2.40 | 80.24 ± 1.79 | 41.44 ± 3.46 | 74.15 ± 3.00 | N/A | 76.56 ± 2.00 | 52.60 ± 2.70 | 76.61 ± 1.74 | | Grachus | 80.82 ± 2.27 | 79.13 ± 1.79 | 38.35 ± 4.83 | 76.03 ± 2.94 | 76.97 ± 3.94 | 72.60 ± 4.24 | 53.66 ± 2.93 | 76.28 ± 0.67 | | TopK | 79.43 ± 3.50 | 77.96 ± 1.58 | 38.35 ± 4.83 | 76.03 ± 2.94 | 76.97 ± 3.94 | 72.60 ± 4.24 | 53.66 ± 2.93 | 76.28 ± 0.67 | | DiffPool | 77.64 ± 1.86 | 76.50 ± 2.32 | 48.34 ± 5.14 | 78.81 ± 3.12 | 80.27 ± 2.51 | 73.15 ± 3.30 | 54.32 ± 2.99 | 76.60 ± 1.04 | | DiffPool-TIP | 83.02 ± 1.70 | 81.09 ± 1.65 | 65.08 ± 4.24 | 79.86 ± 3.12 | 82.12 ± 2.53 | 76.40 ± 3.13 | 55.53 ± 2.92 | 77.75 ± 1.18 | | MinCutPool | 77.92 ± 1.67 | 75.88 ± 2.06 | 39.83 ± 2.63 | 78.25 ± 3.84 | 79.15 ± 3.51 | 73.80 ± 3.54 | 53.87 ± 2.95 | 75.60 ± 0.54 | | MinCutPool-TIP | 80.17 ± 1.29 | 79.48 ± 1.37 | 46.34 ± 3.85 | 79.73 ± 3.27 | 80.87 ± 2.47 | 75.20 ± 2.67 | 54.47 ± 2.27 | 77.18 ± 0.83 | | DMoNPool | 78.03 ± 1.64 | 76.62 ± 1.94 | 40.82 ± 3.68 | 78.63 ± 3.89 | 79.16 ± 3.61 | 73.50 ± 3.01 | 54.07 ± 3.08 | 76.30 ± 1.34 | | DMoNPool-TIP | 79.68 ± 1.38 | 78.46 ± 1.50 | 45.84 ± 5.32 | 79.73 ± 3.66 | 81.46 ± 2.96 | 74.25 ± 3.06 | 54.23 ± 2.64 | 76.70 ± 0.62 | Figure 4: Graphs pooled with different methods in graph classification experiment. DiffPool-TIP achieves the highest accuracy in most cases, this may be attributed to the fact that DiffPool applies three consecutive layers of GNNs after each pooling operation, while the other two methods only utilize one GNN layer. The coarsened graphs with invariant sub-topology are mostly sparsely connected, so additional layers of GNN may have a positive impact on passing crucial messages. Moreover, an intriguing observation can be found on ENZYMES, where TOGL significantly surpasses the baseline GNNs. TOGL in practice, incorporates PH into GNNs, so this results underscores the significance of incorporating topological information for improved performance on ENZYMES. Similarly, our method also demonstrates notable improvements by augmenting the three dense pooling methods on the ENZYMES dataset. However, it is worth noting that TOGL only exhibits marginal improvements or even underperforms on the other datasets. This suggests that simply integrating PH features into GNN layers does not fully exploit topological information. Conversely, injecting global topological invariance into pooling layers yields superior performance. Lastly, we provide the training curves in Appendix E.4, which demonstrate that incorporating meaningful topology leads to improved performance. 6 CONCLUSION In this paper, we developed a method named Topology-Invariant Pooling (TIP) that effectively integrates global topological invariance into graph pooling layers. 
This approach is inspired by the observation that the filtration operation in PH naturally aligns with the GP process. We theoretically showed that PH is at least as expressive as WL-test, with evident examples demonstrating TIP’s expressivity beyond dense pooling methods. Empirically, TIP indeed preserved persistent global topology information, and achieved substantial performance improvement on top of several pooling methods on various datasets, demonstrating strong flexibility and applicability. REFERENCES Davide Bacciu and Luigi Di Sotto. A non-negative factorization approach to node pooling in graph convolutional neural networks. In AI*IA 2019–Advances in Artificial Intelligence: XVIIIth International Conference of the Italian Association for Artificial Intelligence, Rende, Italy, November 19–22, 2019, Proceedings 18, pp. 294–306. Springer, 2019. Filippo Maria Bianchi and Veronica Lachi. The expressive power of pooling in graph neural networks. arXiv preprint arXiv:2304.01575, 2023. Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In International conference on machine learning, pp. 874–883. PMLR, 2020. Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. Cătălina Cangea, Petar Veličković, Nikola Jovanović, Thomas Kipf, and Pietro Liò. Towards sparse hierarchical graph classifiers. arXiv preprint arXiv:1811.01287, 2018. Mathieu Carrière, Frédéric Chazal, Yuichi Ike, Théo Lacombe, Martin Royer, and Yuhei Umeda. Perslay: A neural network layer for persistence diagrams and new graph topological signatures. In International Conference on Artificial Intelligence and Statistics, pp. 2786–2796. PMLR, 2020. Inderjit S Dhillon, Yuqiang Guan, and Brian Kulis. Weighted graph cuts without eigenvectors a multilevel approach. IEEE transactions on pattern analysis and machine intelligence, 29(11):1944–1957, 2007. Herbert Edelsbrunner and John L Harer. Computational topology: an introduction. American Mathematical Society, 2022. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. Hongyang Gao and Shuiwang Ji. Graph u-nets. In international conference on machine learning, pp. 2083–2092. PMLR, 2019. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263–1272. PMLR, 2017. Liyu Gong and Qiang Cheng. Exploiting edge features for graph neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9211–9219, 2019. Daniele Grattarola, Daniele Zambon, Filippo Maria Bianchi, and Cesare Alippi. Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. Christoph Hofer, Roland Kwitt, Marc Niethammer, and Andreas Uhl. Deep learning with topological signatures. Advances in neural information processing systems, 30, 2017. Christopho Hofer, Florian Graf, Bastian Rieck, Marc Niethammer, and Roland Kwitt. Graph filtration learning. In International Conference on Machine Learning, pp. 
4314–4323. PMLR, 2020. Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, and Karsten Borgwardt. Topological graph neural networks. arXiv preprint arXiv:2102.07835, 2021. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118–22133, 2020.
QJGj07PD9C
While the method of using tanh pre-activation before each FFT seems to avoid numerical instability, it would have been better if some theoretical justification had been given (even for simple cases). I believe similar theory should also have been provided for the learning rate schedule.
Guaranteed Approximation Bounds for Mixed-Precision Neural Operators Renbo Tu\textsuperscript{1}, Colin White\textsuperscript{2}, Jean Kossaifi\textsuperscript{3}, Boris Bonev\textsuperscript{3}, Gennady Pekhimenko\textsuperscript{1}, Kamyar Azizzadenesheli\textsuperscript{3}, Anima Anandkumar\textsuperscript{2} \textsuperscript{1} University of Toronto, \textsuperscript{2} Caltech, \textsuperscript{3} NVIDIA Abstract Neural operators, such as Fourier Neural Operators (FNO), form a principled approach for learning solution operators for partial differential equations (PDE) and other mappings between function spaces. However, many real-world problems require high-resolution training data, and the training time and limited GPU memory pose big barriers. One solution is to train neural operators in mixed precision to reduce the memory requirement and increase training speed. However, existing mixed-precision training techniques are designed for standard neural networks, and we find that their direct application to FNO leads to numerical overflow and poor memory efficiency. Further, at first glance, it may appear that mixed precision in FNO will lead to drastic accuracy degradation since reducing the precision of the Fourier transform yields poor results in classical numerical solvers. We show that this is not the case; in fact, we prove that reducing the precision in FNO still guarantees a good approximation bound, when done in a targeted manner. Specifically, we build on the intuition that neural operator learning inherently induces an approximation error, arising from discretizing the infinite-dimensional ground-truth input function, implying that training in full precision is not needed. We formalize this intuition by rigorously characterizing the approximation and precision errors of FNO and bounding these errors for general input functions. We prove that the precision error is asymptotically comparable to the approximation error. Based on this, we design a simple method to optimize the memory-intensive half-precision tensor contractions by greedily finding the optimal contraction order. Through extensive experiments on different state-of-the-art neural operators, datasets, and GPUs, we demonstrate that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy. 1 Introduction Real-world problems in science and engineering often involve solving systems of partial differential equations (PDEs) (Strang, 2007). These problems typically require a fine grid for numerical solvers to guarantee convergence, e.g., climate modeling and 3D fluid dynamics simulations, and the full-scale solutions to these real-world systems are out of reach even for the world’s largest supercomputers (Schneider et al., 2017). To overcome the computational challenges of numerical solvers, fast surrogates using machine learning have been developed. Among them, neural operators are a powerful data-driven technique for solving PDEs (Li et al., 2020a; 2021a; Kovachki et al., 2023; Lu et al., 2021). Neural operators learn maps between function spaces, and they can be used to approximate the solution operator of a given PDE. 
The input and output functions in neural operators can be at any resolution or on any mesh, and the output function can be evaluated at any point in the domain; therefore, neural operators are discretization convergent: once the neural operator is trained, it can be evaluated, without any retraining, at any resolution, and it converges to a unique limit under mesh refinement (Kovachki et al., 2023). By learning from discretized data from input and solution functions, the trained models can perform inference orders of magnitude faster than traditional PDE solvers (Kovachki et al., 2023). In particular, the Fourier Neural Operator (FNO) and its extensions have been successful in solving... Figure 1: **Top:** example data points from each dataset: Navier-Stokes, Darcy Flow, Spherical Shallow Water, Shape-Net Car CFD, and Ahmed-body CFD. **Bottom:** performance of our method compared to full-precision and AMP on each dataset. For each dataset, we plot test error (y-axis) and GPU memory (x-axis), and we annotate the maximum throughput (proportional to the area of each ball). All data are measured on the same hardware (RTX 3090 Ti) and the same virtual environment. Memory decreases by up to 50%, while $L^2$ loss increases by at most 0.28%. PDE-based problems with significant speedups (Li et al., 2021a; 2023; Bonev et al., 2023; Kossaifi et al., 2023; Liu et al., 2022; Gopakumar et al., 2023). Despite their successes, training of neural operators is still compute- and memory-intensive when faced with extremely high-resolution and large-scale problems. For conventional deep learning models, there is a wealth of knowledge on automatic mixed precision (AMP) training in order to reduce memory usage and increase computational throughput (Rakka et al., 2022). We find that their direct application to neural operators results in poor memory efficiency and numerical overflow; when applied to FNO, most memory-intensive operations are in the spectral domain, which is complex-valued and not handled well by the standard AMP, as seen in Figure 1. Conventional wisdom may suggest that reducing precision in FNO without drastic accuracy degradation is not feasible. The presence of the Fourier transform in FNO appears to be a limitation in reducing precision since it suffers from numerical instabilities on high-frequency modes under reduced precision. Hence, numerical solvers, such as pseudo-spectral solvers, do indeed require very high precision to ensure numerical stability (Trefethen, 2000), and time-stepping leads to an accumulation of round-off errors under low precision that quickly explode. However, we show both theoretically and empirically that this is not the case for FNO and other operator learning approaches. **Our approach:** In this work, we devise the first mixed-precision training method for neural operators and also derive approximation bounds that guarantee the expressivity of mixed-precision operators. We use the intuition that the Fourier transform within neural operators is already approximated by the discrete Fourier transform, since the training dataset is a discrete approximation of the ground-truth continuous signal. Intuitively, since we already incur approximation error from discretization, there is no need to run the discrete Fourier transform in full precision. Building on this, we show that, in a single FNO layer, the round-off error due to lower precision is small. 
Therefore, the overall round-off error remains small since a full FNO architecture is only made up of a few layers—much smaller than the number of steps in a classical pseudo-spectral solver where errors accumulate and explode. We make the above intuitions concrete, and we develop the following theoretical approximation bounds: we characterize the precision error and resolution error of the FNO block, proving asymptotic bounds of $n^{-2/d}$ and $\epsilon$, respectively, for mesh size $n$, dimension $d$, and dynamic range $\epsilon$. Therefore, we justify using mixed precision since its error is comparable to the discretization error already present in FNO. Motivated by these theoretical results, we introduce a mixed-precision training method by optimizing the memory-intensive half-precision tensor contractions. We devise a simple and lightweight greedy strategy to find the optimal contraction order, which considers vectorization for each intermediate tensor. Unfortunately, naively training neural operators in mixed precision leads to prohibitive numerical instability due to overflows in the FNO block. To address this issue, we study numerical stabilization techniques for neural operators, finding that \(\tanh\) pre-activation before each FFT consistently avoids numerical instability. Further, we incorporate additional vectorization steps for complex-valued inputs, not present in standard packages designed for real-valued networks. We carry out extensive experiments to show a significant reduction in memory usage and improved throughput without sacrificing accuracy; see Figure 1. We consider three variants of FNO: tensorized (TFNO) (Kossaifi et al., 2023), spherical (SFNO) (Bonev et al., 2023), and geometry-informed (GINO) (Li et al., 2023). Across different datasets and GPUs, our method results in up to 58% improvement in training throughput and 50% reduction in GPU memory usage with little or no reduction in accuracy. Furthermore, we show that our method is discretization convergent via zero-shot super-resolution experiments. Finally, we propose a precision schedule training routine, which transitions from mixed to full precision during training; we show this method achieves better than the baseline full-precision accuracy. Our mixed-precision training routine is lightweight and easily added to new neural operators. We release our codebase and all materials needed to reproduce our results at https://github.com/neuraloperator/neuraloperator. We summarize our main contributions below: - We introduce the first mixed-precision training routine for neural operators, which optimizes the memory-intensive tensor contraction operations in the spectral domain, and we propose using \(\tanh\) pre-activations to minimize numerical instability. - We theoretically ground our work by characterizing the precision and discretization errors of the FNO block, showing that these errors are comparable, proving that, done right, mixed-precision training of neural operators leads to little or no performance degradation. - We empirically verify the superiority of our mixed-precision training approach on three state-of-the-art neural operators, TFNO, GINO, and SFNO, across four different datasets and GPUs. Our method uses half the memory and increases training throughput up to 58% across different GPUs with little or no reduction in accuracy (< 0.1%). - We provide an efficient implementation of our approach in PyTorch, which we open-source, along with all data needed to reproduce our results. 
## 2 BACKGROUND AND RELATED WORK **Neural Operators.** Many real-world scientific and engineering problems rely on solving partial differential equations (PDEs). As such, there is a recent focus on using machine learning-based methods to solve PDEs (Gupta et al., 2021; Bhatnagar et al., 2019; Lu et al., 2019; Adler & Öktem, 2017). However, most of these methods use standard neural networks and are therefore limited to a mapping between fixed input and output grid. In other words, they operate on a fixed, regular discretization and cannot learn the continuous mapping between function spaces. **Neural operators** are a new technique that addresses this limitation by directly learning maps between function spaces (Li et al., 2021b; 2020a;b; Kovachki et al., 2023). The input and output functions to neural operators can be in any resolution or mesh, and the output function can be evaluated at any point in the domain; therefore, neural operators are discretization convergent: once the neural operator is trained, it can be evaluated, without any retraining, at any resolution, and it converges to a unique limit under mesh refinement (Kovachki et al., 2023). **FNO and extensions.** The Fourier neural operator, inspired by spectral methods, is a highly successful neural operator (Li et al., 2021a; Gopakumar et al., 2023; Renn et al., 2023; Wen et al., 2022). Let \(A : \{a : D_A \rightarrow \mathbb{R}^{d_A}\}\) and \(U : \{u : D_U \rightarrow \mathbb{R}^{d_U}\}\) denote the input and output function spaces, respectively. In this work, we consider the case where \(D_A = D_U \subset \mathbb{R}^d\) for \(d \in \mathbb{N}\). Given a dataset of pairs of initial conditions and solution functions \(\{(a_j, u_j)\}_{j=1}^N\), which are consistent with an operator \(G(a_j) = u_j\) for all \(1 \leq j \leq N\), the goal is to learn a neural operator \(G_\theta\) that approximates \(G\). The primary operation in FNO is the Fourier convolution operator, \((K v_t)(x) = \mathcal{F}^{-1}(R \cdot T_K(\mathcal{F} v_t))(x)\), \(\forall x \in D\), where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Fourier transform and its inverse, \(R\) denotes a learnable transformation, \(T_K\) denotes a truncation operation, and \(v_t\) denotes the function at the current layer of the neural operator. We use the discrete Fast Fourier Transform (FFT) and its inverse (IFFT) to implement this operator on discrete data. Building on FNO, Kossaifi et al. (2023) introduced the Tensorized Fourier Neural Operator (TFNO). Building on tensor methods, which have proven very successful in deep learning Panagakis et al. (2021; 2024), TFNO computes a tensor factorization (Kolda & Bader, 2009) of the weight tensors in the spectral domain, acting as a regularizer that boosts performance while decreasing parameter count. Additionally, two FNO-based neural operators have recently been proposed to improve performance in non-regular geometries. The Spherical Fourier Neural Operator (SFNO) (Bonev et al., 2023) is an extension of FNO to the spherical domain, which is achieved by the use of the spherical convolution theorem (Driscoll & Healy, 1994). The Geometry-Informed Neural Operator (GINO) (Li et al., 2023) is a highly efficient FNO-based architecture for solving PDEs with varying, irregular geometries. The architecture consists of an FNO, along with graph neural operators to transform the irregular grid inputs into and from regular latent grids on which FNO can be efficiently applied (Li et al., 2020a). Mixed-Precision Training. 
Mixed-precision training of neural networks consists of reducing runtime and memory usage by representing input tensors and weights (and performing operations) at lower-than-standard precision. For example, PyTorch has a built-in mixed-precision mode called automatic mixed precision (AMP), which places all operations at float16 rather than float32, with the exception of reduction operations, weight updates, normalization operations, and specialized operations (Paszke et al., 2019). bfloat16 and TF32 are common mixed-precision methods, yet they do not support discrete Fourier transforms, which are essential to FNOs (Kalamkar et al., 2019; Dean et al., 2012). There is a variety of work targeted at quantization for inference, yet these works do not reduce memory during training (Goel et al., 2020). While well-studied for standard neural nets, mixed-precision training has not been studied for FNO (Zhao et al., 2022; De Sa et al., 2018; Micikevicius et al., 2017; Jia et al., 2018). The most similar work to ours is FourCastNet (Pathak et al., 2022), a large-scale climate model that uses mixed precision with Adaptive Fourier Neural Operators (Guibas et al., 2021). However, mixed precision is not applied to the FFT or complex-valued multiplication operations, a key challenge the current work addresses. Very recently, another work studies the method of quantization for FNO at inference time (Dool et al., 2023). However, unlike our work, they only study methods that improve memory usage at inference time, not training time. Another paper has appeared recently, which proposes to apply mixed precision to PINNs and DeepONet (Hayford et al., 2024). 3 GUARANTEED APPROXIMATION BOUNDS In this section, to motivate our mixed-precision neural operator training routine in Section 4, we theoretically show that the inherent discretization error in the FNO block, from computing the discrete Fourier transform instead of the Fourier transform, is comparable to the precision error, from computing the FNO block in half precision rather than in full precision. We present our results in terms of a forward Fourier transform, and we give the full details and discussion in Appendix A. We start by formally defining the discretization error of the FNO block. Let $D$ denote the closed unit cube $[0, 1]^d$ for dimension $d \in \mathbb{N}$, subdivided into $n = m^d$ cubes $Q_1, \ldots, Q_n$ with sidelength $\frac{1}{m}$ and for each $j$, let $\xi_j$ denote the point with the minimum value in each dimension, in $Q_j$. Let $v : D \to D$ denote an intermediate function within the FNO. Recall from the previous section that the primary operation in FNO is the Fourier convolution operator, $(Kv_t)(x)$. We say the discretization error of $F(v)$ is the absolute difference between the Fourier transform of $v$, and the discrete Fourier transform of $v$ via discretization of $v$ on $Q = (\{Q_1, \ldots, Q_n\}, \{\xi_1, \ldots, \xi_n\})$. Formally, given Fourier basis function $\varphi_\omega(x) = e^{2\pi i \langle \omega, x \rangle}$, $$\text{Disc}(v, Q, \omega) = \left| \int_D v(x)\varphi_\omega(x)dx - \sum_{j=1}^n v(\xi_j)\varphi_\omega(\xi_j)|Q_j| \right|. \quad (1)$$ In other words, if we knew the true (infinite-dimensional) input function, we could run FNO using the (continuous) Fourier transform. However, for real-world applications, we only have access to a discretization of the input, for example, on a $128 \times 128$ grid, so we incur a discretization error. We bound the discretization error as follows. 
For the full proof details, see Appendix A. **Theorem 3.1.** For any $M > 0$ and $L \geq 1$ let $K \subset C(D)$ be the set of $L$-Lipschitz functions, bounded by $||v||_\infty \leq M$. Then for all $n, Q$, there exist $\omega, c_1, c_2 > 0$ such that $$c_1 \sqrt{d} \cdot Mn^{-2/d} \leq \sup_{v \in K} (\text{Disc}(v, Q, \omega)) \leq c_2 \sqrt{d}(|\omega| + L)Mn^{-1/d}. $$ Intuitively, we bound the Riemann sum approximation of the true integral by showing that in the case where the function $v$ is bounded and Lipschitz, then each of the $n$ intervals contributes $n^{-(1+\frac{1}{d})}$ error, up to parameters of the function. Note that the discretization error upper bound also scales linearly for the frequency $\omega$, although we empirically show in Section 4 that the energy is concentrated in the lower frequency modes. The lower bound is satisfied when $v(x) = x_1 \cdots x_d$. Furthermore, note that given a function $v$, if $n$ is not large enough, the discretization error can become arbitrarily large due to aliasing. For example, the function $v(x) = M \sin(2\pi(n + \omega)x)$ has discretization error $\Omega(M)$. Next, we say that the precision error of $F(v)$ is the absolute difference between $F(v)$ and $\tilde{F}(v)$, computing the discrete Fourier transform in half precision. Specifically, we define an $(a_0, \epsilon, T)$-precision system as a mapping $q : \mathbb{R} \rightarrow S$ for the set $S = \{0\} \cup \{a_0(1+\epsilon)^i\}_{i=0}^T \cup \{-a_0(1+\epsilon)^i\}_{i=0}^T$, such that for all $x \in \mathbb{R}$, $q(x) = \text{argmin}_{y \in S} |x - y|$. This represents a simplified version of the true mapping used by Python from $\mathbb{R}$ to float32 or float16 (we give further justification of this definition in Appendix A). Then, we define $$\text{prec}(v, Q, q, \omega) = \left| \sum_i v(\xi_i)\varphi_\omega(\xi_i)|Q_i| - \sum_i q(v(\xi_i))q(\varphi_\omega(\xi_i))|Q_i| \right|.$$ Now, we bound the precision error as follows. **Theorem 3.2.** For any $M > 0$ and $L \geq 1$ let $K \subset C(D)$ be the set of $L$-Lipschitz functions, bounded by $||v||_\infty \leq M$. Furthermore let $q$ be an $(a_0, \epsilon, T)$-precision system. Then for all $n, Q, \omega$, there exists $c > 0$ such that $$\sup_{v \in K} (\text{prec}(v, Q, q, \omega)) \leq c \cdot \epsilon M.$$ Once again, we show that each interval contributes $\epsilon/n$ error up to the function’s parameters. Taken together, Theorem 3.1 and Theorem 3.2 show that asymptotically, the discretization error can be as large as $Mn^{-2/d}$ (up to constants), while the precision error is always bounded by $\epsilon M$. For example, for float16 precision ($\epsilon = 10^{-4}$), our theory suggests that the precision error is comparable to the discretization error for three-dimensional meshes up to size $1,000,000$. Given this motivation, we present our mixed-precision training pipeline in the next section. ### 4 Empirical Study of Our Mixed-Precision Neural Operator We empirically study our proposed mixed-precision pipeline for FNO training and demonstrate significant memory usage reduction and large increases in training throughput on various GPUs. We then study failure modes of a naïve application of mixed precision and present our empirically justified pre-activation-based stabilizer solution. Finally, we perform extensive experiments and ablations on our end-to-end training method across different architectures and datasets. 
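As a purely illustrative sanity check of these two bounds (and not the paper's experimental code), the toy one-dimensional script below compares, for the Lipschitz function $v(x) = x$ on $[0, 1]$, the discretization error of the Riemann-sum Fourier coefficient against the additional error introduced by quantizing the summands to float16; all names and the choice of $v$ are hypothetical.

```python
import numpy as np

def fourier_coeff_errors(omega: int = 1, n: int = 128):
    """Toy d = 1 comparison of discretization vs. precision error for v(x) = x."""
    xi = np.arange(n) / n                     # left endpoints of the n cells Q_j
    v = xi                                    # v(x) = x: 1-Lipschitz, |v| <= 1
    phi = np.exp(2j * np.pi * omega * xi)     # Fourier basis samples
    exact = -1j / (2 * np.pi * omega)         # analytic value of the integral for integer omega
    riemann = np.sum(v * phi) / n             # discrete approximation in full precision
    # Simulated half precision: quantize the real-valued factors to float16.
    v16 = v.astype(np.float16).astype(np.float64)
    phi16 = (phi.real.astype(np.float16).astype(np.float64)
             + 1j * phi.imag.astype(np.float16).astype(np.float64))
    riemann16 = np.sum(v16 * phi16) / n
    return abs(exact - riemann), abs(riemann - riemann16)

disc_err, prec_err = fourier_coeff_errors()
print(disc_err, prec_err)
```

In this toy setting, the precision error stays near the float16 round-off level while the discretization error dominates at moderate $n$, matching the qualitative message of Theorems 3.1 and 3.2.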
#### 4.1 Datasets and Experimental Setup We summarize each of the four datasets we use; see Appendix B.2 for detailed descriptions. **Navier-Stokes.** We consider the Navier-Stokes equations for a viscous, incompressible fluid in vorticity form on the unit torus. We use the same dataset as Kossaifi et al. (2023), with a Reynolds number of 500, composed of 10,000 training samples and 2,000 test samples with resolution $128 \times 128$. **Darcy Flow.** We consider the steady-state 2D Darcy Flow equation, which models fluid flow through a porous medium. We use the same dataset as Li et al. (2021a), with 5,000 training samples and 1,000 test samples at resolution $128 \times 128$. **Spherical Shallow Water Equations.** We use the dataset from Bonev et al. (2023), which generates random initial conditions on the sphere at resolution $256 \times 512$. At each epoch, 120 training samples and 20 validation samples are generated on the fly. **Shape-Net Car and Ahmed-body.** Our final two datasets are 3D real-world car dataset generated by prior work (Umetani & Bickel, 2018; Li et al., 2023), which consists of mesh points that represent a unique 3D car, and the goal is to predict the full 3D pressure field. We use 611 water-tight shapes from car surfaces from Shape-Net (Chang et al., 2015), with 500 samples for training and the rest for the test set. For Ahmed-body, we have 500 for training and 51 for test. The spatial resolution for both is $64 \times 64 \times 64$. **Experimental Setup.** We run the Navier Stokes and Darcy flow experiments on the TFNO architecture (Kossaifi et al., 2023). For the Spherical SWE, we use SFNO (Bonev et al., 2023) to handle the spherical geometry, and for Shape-Net Car and Ahmed-body, we use GINO (Li et al., 2023) to handle the large-scale, irregular geometry. All models have the FNO as backbone. We use the official implementation and default hyperparameters for all models. 4.2 Mixed-Precision Module for Complex-Valued Tensor Contraction Now we introduce our theoretically and empirically motivated mixed-precision pipeline for the FNO block. We start by profiling FNO training workloads, identifying the complex-valued tensor contraction within the FNO block as the computational bottleneck, accounting for 4 out of the 5 most time-consuming GPU kernels in the entire training pipeline (see Appendix B.4 for the full profiling results). Furthermore, existing mixed-precision tools such as Pytorch’s Automatic Mixed Precision (AMP) (Paszke et al., 2019) leave the operator in full precision since it only autocasts real-valued modules in FNO. We introduce a simple and lightweight workaround: we break down each tensor contraction into sub-expressions consisting of at most two terms and perform each of these contractions by temporarily converting tensors to reals. To optimize the memory usage, we use a simple greedy algorithm to select the next \texttt{einsum} step that minimizes the intermediate tensor size (see Figure 2). The complex-valued operations, including the forward FFT, tensor contraction, and inverse FFT, are done in half-precision. This completes the half-precision FNO block as AMP manages non-FNO, real-valued operations. Our simple half-precision FNO block module achieves up to 38.6% reduction in memory and up to 50% reduction in memory when combined with Pytorch’s native AMP; see Figure 3. Note that the reduction from AMP + Half-Prec FNO is greater than the sum of its parts due to lack of the additional casting to and from half-precision. 
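To make the greedy einsum-step selection described above concrete, the following is a minimal sketch of one possible implementation; the subscripts, the toy CP-factorized contraction in the usage example, and all names are illustrative assumptions rather than the library's actual API.

```python
from itertools import combinations
import numpy as np

def greedy_contraction_order(operand_subs, output_sub, dim_sizes):
    """Greedily pick pairwise einsum steps that minimize each intermediate size.

    operand_subs: list of index strings, one per operand (e.g. ["bixy", "ioxy"]).
    output_sub:   index string of the final result (e.g. "boxy").
    dim_sizes:    dict mapping each index letter to its dimension size.
    Returns a list of (left_subs, right_subs, result_subs) contraction steps.
    """
    subs = list(operand_subs)
    order = []
    while len(subs) > 1:
        best = None
        for i, j in combinations(range(len(subs)), 2):
            # Indices that must survive: they appear in the output or in other operands.
            others = [set(subs[k]) for k in range(len(subs)) if k not in (i, j)]
            keep = set(output_sub).union(*others)
            result = "".join(sorted((set(subs[i]) | set(subs[j])) & keep))
            size = int(np.prod([dim_sizes[c] for c in result])) if result else 1
            if best is None or size < best[0]:
                best = (size, i, j, result)
        _, i, j, result = best
        order.append((subs[i], subs[j], result))
        # Replace the two operands with their (conceptual) intermediate result.
        subs = [s for k, s in enumerate(subs) if k not in (i, j)] + [result]
    return order

# Hypothetical FNO-style spectral contraction with a CP-factorized weight:
steps = greedy_contraction_order(
    ["bixy", "ir", "or", "xr", "yr"], "boxy",
    {"b": 8, "i": 64, "o": 64, "x": 32, "y": 32, "r": 16},
)
print(steps)
```

Each selected pairwise step can then be executed as its own small einsum (on real views of the complex tensors), keeping the largest intermediate as small as the greedy choice allows.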
The memory usage reduction can then be used to train on a larger batch size, thereby improving the overall training throughput and efficiency. On three different Nvidia GPUs, RTX 3090 Ti, V100, and RTX A6000, we demonstrate a consistent improvement in training throughput, in terms of numbers of training samples processed per second, from 1.23X to 1.58X over the baseline in Figure 4. 4.3 Numerical Stability via Pre-Activation Mixed-precision training is prone to underflow and overflow because its dynamic range is significantly smaller than that of full precision (Rakka et al., 2022). Notably, for all four of the datasets in our study, naively running FNO in mixed precision results in training failure due to NaN outputs. Furthermore, we empirically show that many common solutions, including loss scaling, gradient clipping, normalization, and delaying updates, all fail to address the numerical instability of mixed-precision FNO (see Appendix B.6). To mitigate this overflow issue, we find that pre-activation before each forward FFT is a very effective method for overcoming numerical instability. We also find that the \texttt{tanh} pre-activation is the highest-performing operation that we considered according to Appendix B.6. Unlike other functions, \texttt{tanh} minimizes changes to small inputs, as it is approximately the identity function near 0 and is smoothly differentiable. Furthermore, \texttt{tanh} preserves the discretization-convergent property of FNO. The strong performance of \texttt{tanh} is also predicted by our theoretical results in Section 3, since it decreases the $L_\infty$ norm and the Lipschitz constant of the input function. In Appendix B.6, we further show that the \texttt{tanh} pre-activation minimally alters the frequency-domain signal in both amplitude and phase. Finally, we demonstrate through an ablation that \texttt{tanh} has negligible impact on the model’s final error (See Appendix Table 5). Figure 3: **GPU memory usage reduction across different variants of neural operators on diverse tasks.** We use an Nvidia RTX 3090 Ti GPU. Our method reduces memory by up to 50%, representing a super-linear combination of the two other methods because it avoids additional casting to and from full precision during the forward pass. Figure 4: **Training throughput and runtime as a function of the method, on different GPUs.** For mixed-precision FNO + AMP, we consistently observe an improvement of training throughput up to $1.58\times$ over the baseline with the TFNO model on Navier Stokes, and up to $1.33\times$ with the SFNO model on Spherical Shallow Water Equations (SWE). Our method also improves upon using only AMP in throughput by over $1.3\times$ on Navier-Stokes and by over $1.2\times$ on Spherical SWE. Batch sizes are selected to fully utilize each GPU. ### 4.4 ERROR COMPARISON WITH FULL-PRECISION Having resolved the critical issue of numerical stability, we demonstrate that our mixed-precision approach achieves errors within 1% of the full-precision baseline across the four datasets. Additionally, we propose a precision-scheduling technique that transitions from mixed to full precision during training, which performs better than full precision in zero-shot super-resolution inference. **Mixed- vs. Full-Precision Training Curves.** Figure 5 illustrates that our mixed-precision approach achieves test errors on par with full precision throughout the training process, remaining within 1% of the full-precision baseline. 
We ensure apples-to-apples comparison by keeping all hyperparameters constant across the two precision settings. These hyperparameter configurations are originally optimized for full precision training, so we show that they are directly transferable to mixed precision. **Precision Scheduling and Zero-Shot Inference.** An important property of neural operators is their *discretization convergence*, meaning that they can be trained on one resolution and tested on a higher resolution (zero-shot super-resolution) (Kovachki et al., 2023). To achieve the best result, we propose a precision schedule, in which the first 25% of training is in mixed-precision, the middle 50% applying only AMP, and the final 25% in full precision. This follows a simple intuition: in the early stages of training, it is okay for gradient updates to be coarser, since the gradient updates are larger overall. However, in the later stages of training, the average gradient updates are much smaller, so Figure 5: Test H1 error curves for FNO on the Navier-Stokes (top left) and Darcy flow (top right) datasets. Test L2 error curves for GINO on the Shape-Net Car (bottom left) and for SFNO on the Shallow Water Equation (bottom right) datasets. Each plot shows the mean of three random seeds and standard deviation as error bars. We compute and report the average difference between training curves in the legend. We also annotate the difference in final test errors with the memory savings of our mixed precision approach. | | 128x128 | 256x256 | 512x512 | 1024x1024 | |------------------|---------|---------|---------|-----------| | | $H^1$ | $L^2$ | $H^1$ | $L^2$ | | Full FNO | 0.00557 | 0.00213 | 0.00597 | 0.00213 | | Mixed FNO (Ours) | 0.00624 | 0.00236 | 0.00672 | 0.00228 | | Precision schedule (Ours) | **0.00503** | **0.00170** | **0.00542** | **0.00170** | Table 1: Zero-shot super resolution. We test zero-shot super-resolution by training each model on 128 × 128 resolution for 19 hours. Mixed precision has a small decrease in accuracy compared to full precision, and using a precision schedule achieves significantly better accuracy. full precision is more important. We use the Navier-Stokes dataset, trained on 128 × 128 resolution (the same setting and model as Figure 5), and tested on 256 × 256, 512 × 512, and 1024 × 1024 resolutions; see Table 1. We find that half-precision has a small decrease in accuracy compared to full precision, and using a precision schedule achieves significantly better generalization. 4.5 Comparison against U-Nets Despite being originally designed for computer vision tasks, U-Nets have recently been used as PDE surrogates (Ronneberger et al., 2015). Here, we compare FNOs against the U-Net baseline on the Darcy Flow and Navier-Stokes datasets. As shown in Table 2, FNO outperforms UNet: our mixed-precision approach yields higher memory reduction compared to AMP applied to U-Nets. 4.6 Ablation studies Here, we perform ablations of our mixed-precision procedure on different parameterizations of FNOs, regularization via reducing frequency modes, and training with other numerical systems. Decomposition of FNO weights. On the Navier-Stokes and Darcy Flow datasets, we adopt a Canonical-Polyadic (CP) factorization of the FNO weights using Kossaifi et al. (2019) to ensure better final error. 
For a generic problem, FNO weights are usually not factorized and are saved in | Model | Navier-Stokes | Darcy Flow | |---------------|--------------|------------| | | Error | Memory Reduction | Error | Memory Reduction | | Full FNO | **0.003** | | 0.01 | | | Mixed FNO (Ours) | 0.004 | 50.4% | **0.007** | 25.8% | | Full U-Net | 0.111 | | 0.024 | | | U-Net + AMP | 0.111 | 20.9% | 0.022 | 24.9% | Table 2: Comparison of $L^2$ error and memory reduction with U-Nets. FNO consistently shows better final error, and our mixed-precision approach yields significantly more memory reduction on Navier-Stokes than AMP applied to U-Nets. Figure 6: For Navier-Stokes (Left) and Darcy Flow (Right), our method accelerates training, whether with CP factorized weights or Dense unfactorized weights, without losing more than 1% in error. their original, dense form. As shown in Figure 6, we experiment with both scenarios and show that our mixed-precision approach improves runtime without sacrificing accuracy. Number of Frequency Modes. Recall that in the FNO architecture, after the FFT, we truncate to a fixed fraction of frequency modes to improve generalization, typically $\frac{1}{3}$ to $\frac{2}{3}$. We run an ablation study on the number of frequency modes used in the FNO architecture. We run frequency modes \{16, 32, 64, 128\} on the Darcy flow dataset in full and half precision; see Figure 14. We find that using too few frequency modes hurts accuracy substantially while using too many frequency modes increases runtime substantially. There is not a significant difference between half-precision and full precision for all frequencies. In addition, we show in Appendix B.10 that for synthetic data, the precision error from half precision is higher for higher frequencies relative to the amplitude. Other Mixed Precision Options. We also experimented with BrainFloat16 (BF16) and TensorFloat32 (TF32). However, PyTorch does not support BF16 for discrete Fourier transforms, which are essential to FNOs. Even when applied to the rest of the network, BF16 suffers from error degradation on Navier Stokes, possibly due to having fewer precision bits than FP16. On the other hand, TF32 is not as efficient as our approach, even on optimized hardware such as the A100 GPU. Moreover, we simulated FP8 training via clipping. See Appendix B.11 for additional details. 5 CONCLUSIONS AND FUTURE WORK In this work, we introduced the first mixed-precision training method for neural operators. We derived approximation bounds for mixed-precision neural operator and rigorously characterized the approximation and precision errors of FNO-based operators, proving that the precision error is comparable to the approximation error. Using this insight, we show, in practice, a significant reduction of memory usage and improvement in throughput, without sacrificing accuracy, solving critical issues from standard neural net-based training methods. Through extensive experiments on different state-of-the-art neural operators, datasets, and hardware, we demonstrate that our approach reduces GPU memory usage by up to 50% with little or no reduction in accuracy. Overall, half-precision FNO makes it possible to train on significantly larger data points with the same batch size. Going forward, we plan to apply this to real-world applications that require super-resolution to enable larger-scale training. ACKNOWLEDGMENTS AA is supported by the Bren Foundation and the Schmidt Sciences through the AI 2050 senior fellow program. 
This project and GP were supported, in part, by the Canada Foundation for Innovation JELF grant, NSERC Discovery grant, AWS Machine Learning Research Award (MLRA), Facebook Faculty Research Award, Google Scholar Research Award, and VMware Early Career Faculty Grant. We would like to express our sincere thanks to Nikola Kovachki for insightful suggestions and discussions, which were invaluable for completing this work. We thank members of the EcoSystem lab, especially Shang Wang, for their constructive feedback. REFERENCES Jonas Adler and Ozan Öktem. Solving ill-posed inverse problems using iterative deep neural networks. *Inverse Problems*, 33(12):124007, 2017. Syed R Ahmed, G Ramm, and Gunter Faltin. Some salient features of the time-averaged ground vehicle wake. *SAE transactions*, pp. 473–503, 1984. Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. *Computational Mechanics*, 64:525–545, 2019. Boris Bonev, Thorsten Kurth, Christian Hundt, Jaideep Pathak, Maximilian Baust, Karthik Kashinath, and Anima Anandkumar. Spherical fourier neural operators: Learning stable dynamics on the sphere. In *Proceedings of the International Conference on Machine Learning (ICML)*, 2023. Gary J Chandler and Rich R Kerswell. Invariant recurrent solutions embedded in a turbulent two-dimensional kolmogorov flow. *Journal of Fluid Mechanics*, 722:554–595, 2013. Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. *arXiv preprint arXiv:1512.03012*, 2015. Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R Aberger, Kunle Olukotun, and Christopher Ré. High-accuracy low-precision training. *arXiv preprint arXiv:1803.03383*, 2018. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, et al. Large scale distributed deep networks. *Advances in neural information processing systems*, 25, 2012. Winfried van den Dool, Tijmen Blankevoort, Max Welling, and Yuki M Asano. Efficient neural pde-solvers using quantization aware training. *arXiv preprint arXiv:2308.07350*, 2023. J.R. Driscoll and D.M. Healy. Computing fourier transforms and convolutions on the 2-sphere. *Advances in Applied Mathematics*, 15:202–250, 6 1994. ISSN 01968858. doi: 10.1006/aama.1994.1008. URL https://linkinghub.elsevier.com/retrieve/pii/S0196885884710086. Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, and George K Thiruvathukal. A survey of methods for low-power deep learning and computer vision. In *2020 IEEE 6th World Forum on Internet of Things (WF-IoT)*, pp. 1–6. IEEE, 2020. Vignesh Gopakumar, Stanislas Pamela, Lorenzo Zanisi, Zongyi Li, Anima Anandkumar, and MAST Team. Fourier neural operator for plasma modelling. *arXiv preprint arXiv:2302.06542*, 2023. John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive fourier neural operators: Efficient token mixers for transformers. In *Proceedings of the International Conference on Learning Representations (ICLR)*, 2021.
g16vmAtJ8x
If so, you are leaking information, thereby completely invalidating your DP. Could you clarify this in detail, as DP is a large section of your work and your conclusions strongly suggest that DP cannot mitigate your attack.
ON THE INADEQUACY OF SIMILARITY-BASED PRIVACY METRICS: RECONSTRUCTION ATTACKS AGAINST “TRULY ANONYMOUS SYNTHETIC DATA” Anonymous authors Paper under double-blind review ABSTRACT Training generative models to produce synthetic data is meant to provide a privacy-friendly approach to data release. However, we get robust guarantees only when models are trained to satisfy Differential Privacy (DP). Alas, this is not the standard in industry as many companies use ad-hoc strategies to empirically evaluate privacy based on the statistical similarity between synthetic and real data. In this paper, we review the privacy metrics offered by leading companies in this space and shed light on a few critical flaws in reasoning about privacy entirely via empirical evaluations. We analyze the undesirable properties of the most popular metrics and filters and demonstrate their unreliability and inconsistency through counter-examples. We then present a reconstruction attack, ReconSyn, which successfully recovers (i.e., leaks all attributes of) at least 78% of the low-density train records (or outliers) with only black-box access to a single fitted generative model and the privacy metrics. Finally, we show that applying DP only to the model or using low-utility generators does not mitigate ReconSyn as the privacy leakage predominantly comes from the metrics. Overall, our work serves as a warning to practitioners not to deviate from established privacy-preserving mechanisms. 1 INTRODUCTION Synthetic data – i.e., artificially generated data produced by machine learning algorithms – has attracted growing interest not only from the research community (Jordon et al., 2022), but also regulatory bodies (Information Commissioner’s Office, 2022; Financial Conduct Authority, 2023), non-profits (UN, 2023; OECD, 2023), and government agencies (Benedetto et al., 2018; NIST, 2018; 2020). It promises a drop-in replacement for sensitive data in various use cases, e.g., private data release, de-biasing, augmentation, etc. Numerous providers of synthetic data solutions have entered a flourishing market attracting considerable investments (Crunchbase, 2022; TechCrunch, 2022; Forbes, 2022), with products serving large corporations in various sectors. The basic idea behind synthetic data is to rely on generative machine learning models, learning the probability distribution of the real data and creating new (synthetic) records by sampling from the trained model. However, models trained without robust privacy guarantees can overfit and memorize individual data points (Carlini et al., 2019b; Webster et al., 2019), which enables attacks like membership and property inference (Hayes et al., 2019; Hilprecht et al., 2019; Chen et al., 2020; Stadler et al., 2022; Annamalai et al., 2023). This, in turn, could lead to disastrous breaches and leakage of individuals’ health, financial, and other sensitive data. Main Motivation. The established framework to bound information leakage and defend against privacy attacks is Differential Privacy (DP) (Dwork et al., 2006; Dwork & Roth, 2014). Specifically, for synthetic data, one needs to train generative models while satisfying DP (Zhang et al., 2017; Jordon et al., 2018; McKenna et al., 2021). While almost all companies in this space claim their synthetic data products meet regulatory requirements such as GDPR, HIPAA, or CCPA, we find that they rarely use DP, as shown in App. A. 
This is worrisome, as models are often trained on sensitive data in highly regulated environments (e.g., medical applications (Hradec et al., 2022)). Rather than relying on well-established privacy notions, many companies use ad-hoc heuristics to guarantee privacy empirically; see, e.g., (Platzer & Reutterer [2021], Mobey Forum [2022]). Some combine unperturbed heuristics with DP, breaking the end-to-end DP pipeline, which ultimately negates its privacy protections, as our evaluation will demonstrate. In fact, even research papers, e.g., in the medical domain, have proposed models that exclusively rely on similar empirical privacy heuristics ([Park et al., 2018], [Lu et al., 2019], [Yale et al., 2019], [Zhao et al., 2021], [Guillaumeux et al., 2023], [Liu et al., 2023], [Yoon et al., 2023]). **Problem Statement.** The heuristics used in industry mainly consist of privacy metrics and filters based on similarity (see Sec. 2), i.e., how close synthetic records are to their nearest neighbor in the train data. If enough synthetic points are too close, according to pre-configured statistical tests vs. holdout test data, they are filtered out, or the whole sample is discarded: otherwise, the data is considered safe. The idea is that synthetic data should be similar and representative of the train data but not too close, which intuitively makes sense. However, the meaningfulness of these heuristics has not been rigorously studied. This motivates assessing the validity of entirely relying on empirical evaluation based on distances and whether this approach risks providing false protection claims. **Technical Roadmap.** We explore, characterize, and analyze the major disadvantages of the most commonly used privacy metrics/filters in industry. (In App. B, we also show counter-examples whereby, even if all privacy tests pass, privacy violations and inconsistencies can still occur.) Then, we propose ReconSyn, a proof of concept black-box attack designed to highlight the inherent weaknesses of the privacy metrics in synthetic data generation. The attack recovers train data from low-density regions (where the most at-risk records reside) with realistic assumptions. Besides the privacy metrics, the adversary can only access a single fitted generative model. In fact, the attack is agnostic to the generative approach, the type of dataset, and use case, etc. **Experimental Evaluation.** In addition to the counter-examples, we present experiments demonstrating the effectiveness of ReconSys vis-à-vis five state-of-the-art tabular models (PrivBayes, MST, DPGAN, PATE-GAN, CTGAN) and five commonly used datasets, including Adult, Census, and MNIST. The attack reconstructs at least 78% of the underrepresented train data records (or outliers) with perfect precision in all settings (see Fig. 1). Some models are more vulnerable: attacking graphical models (PrivBayes, MST) requires fewer rounds to achieve similar results than GANs. In fact, ReconSyn is successful even when attacking low-utility generators such as Random and Independent. **Main Contributions:** 1. We are the first to analyze the undesirable properties of the most common privacy metrics and filters used in industry to empirically “guarantee” the privacy of synthetic data. 2. We propose a novel reconstruction attack, ReconSyn, with minimal and realistic assumptions – black-box access to a single trained generative model and the privacy metrics. 3. 
We demonstrate that applying DP to the generative model does not mitigate ReconSyn when combined with unperturbed heuristics, as leakage persists through the metrics. 4. We show that using similarity-based privacy metrics does not provide GDPR compliance. 5. We discuss how, assuming a similar threat model, ReconSyn can be adapted to other attacks like membership and attribute inference. Overall, our work prompts the need to move away from attempts to guarantee privacy in an ad-hoc, empirical way. We believe our findings will be useful to practitioners when deploying solutions requiring the processing of sensitive data, as well as policymakers when creating standards and best practices for privacy-preserving synthetic data adoption. **NB:** We have shared our work with the relevant synthetic data companies in the spirit of responsible disclosure and are working with them on the next steps – see Ethics Statement. 2 PRIVACY METRICS AND FILTERS In this section, we present the three privacy metrics and two filters broadly used by synthetic data companies to guarantee privacy (see App. A). The former are used to measure the privacy of the synthetic data and run pass/fail statistical tests, while the latter remove records from the generated data points based on their similarity to train records or outliers. A common implementation pre-processing step, which, unless stated otherwise, we also follow, is to discretize the data. The input to all metrics is train, synthetic, and holdout test dataset, $D^n_{\text{test}}$ (with the same size, $n$, as the train data, $D^n_{\text{train}}$) – which comes from the same distribution as $D^n_{\text{train}}$ but was not used to train the generative model – to serve as a reference. **Intuition.** The overall idea behind similarity-based privacy metrics is that synthetic records should be as close as possible to train ones, but not too close, i.e., not closer than what would be expected from the holdout records (Platzer & Reutterer [2021]). More precisely, we compute the closest pairwise distances (for discrete data, we use Hamming distance, for continuous – Euclidean) for $(D^n_{\text{train}}, D^n_{\text{synth}})$ and $(D^n_{\text{train}}, D^n_{\text{test}})$, and run a pass/fail test. The passing criterion is a comparison between simple statistics calculated from the two distributions – the average or the 5th percentile, while the output of the metrics is the pass/fail flag alongside the actual statistics as per (MOSTLY AI [2022b]). If all three tests pass, the data becomes “truly anonymous synthetic data” (MOSTLY AI [2020]) and could be freely shared alongside the privacy scores. Otherwise, the synthetic data is not considered safe enough to be released. **Identical Match Share (IMS)** is a privacy metric that captures the proportion of identical copies between train and synthetic records. The test passes if that proportion is smaller or equal to the one between train and test datasets. In practice, IMS is used or advocated by (MOSTLY AI [2020], Syntegra [2021], DataCebio [2022]), and others (Lu et al. [2019], ONS DSC [2022], AWS [2022]). **Distance to Closest Records (DCR)** also compares the two sets of distances. It looks at the overall distribution of the distances to the nearest neighbors or closest records. The test passes if $(D^n_{\text{train}}, D^n_{\text{synth}})$-5th percentile is larger or equal than the other pair. 
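To make the two tests above concrete, below is a minimal sketch of how IMS and DCR could be computed on discretized data with Hamming distance; the function names, the brute-force nearest-neighbour search, and the pass criteria coded here are illustrative assumptions, not any vendor's actual implementation.

```python
import numpy as np

def hamming_nn(queries, reference):
    """For every row of `queries`, Hamming distance to its nearest neighbour in `reference`."""
    queries, reference = np.asarray(queries), np.asarray(reference)
    dists = (queries[:, None, :] != reference[None, :, :]).sum(-1)  # pairwise Hamming distances
    return dists.min(axis=1)

def ims_test(train, synth, test):
    """Pass if synthetic data has no more exact train matches than the holdout data."""
    share = lambda ref: np.mean(hamming_nn(ref, train) == 0)
    return share(synth) <= share(test)

def dcr_test(train, synth, test):
    """Pass if the 5th percentile of synth-to-train NN distances is no smaller
    than the 5th percentile of test-to-train NN distances."""
    p5 = lambda ref: np.percentile(hamming_nn(ref, train), 5)
    return p5(synth) >= p5(test)
```

Note how each test collapses an entire distribution of nearest-neighbour distances into a single statistic and a binary pass/fail outcome, which is precisely the kind of coarse criterion whose limitations are analyzed in the next section.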
DCR is supposed to protect against settings where the train data was just slightly perturbed or noised and presented as synthetic (Tonic [2023a], MOSTLY AI [2020], Hazy [2023b], Syntegra [2021], Statice [2023a]), and several scientific studies/blogposts (Park et al. [2018], Lu et al. [2019], Yale et al. [2019], Zhao et al. [2021], ONS DSC [2022], AWS [2022], Guillaumeux et al. [2023], Liu et al. [2023], Yoon et al. [2023]) use DCR. **Nearest Neighbor Distance Ratio (NNDR)** is very similar to DCR, but the nearest neighbors’ distances are divided by the distance to the second nearest neighbor. The idea is to add further protection for outliers by computing relative rather than absolute distances. NNDR, too, compares the 5th percentile between the two sets of distributions, and is used by (MOSTLY AI [2020]) and in academic papers (Zhao et al. [2021], Guillaumeux et al. [2023]). **Similarity Filter (SF)** is similar in spirit to the privacy metrics, but rather than just measuring similarity, it excludes or filters out individual synthetic data points if they are identical or too close to train ones. Essentially, SF aims to ensure that no synthetic record is overly similar to a train one. It is used by (Replica Analytics [2020], Gretel [2021], Synthesized [2023a]). **Outlier Filter (OF)** focuses on the outliers; it removes synthetic records that could be considered outliers with respect to the train data, and is used by (Gretel [2021]). **Passing Criteria.** Throughout our experiments, we adopt the criteria from (MOSTLY AI [2020]), unless stated otherwise, i.e., a synthetic dataset (for whose generation none of the filters were used) is considered private if all three privacy tests—coming from IMS, DCR, and NNDR—pass. **Additional Background Information.** We defer the related work on reconstruction in databases and machine learning to App. B. In App. C, we outline common notation on synthetic data and details about the generative models, datasets, and the criteria for defining outliers in our evaluation. 3 FUNDAMENTAL LIMITATIONS OF SIMILARITY-BASED PRIVACY METRICS In this section, we identify and discuss several issues with using similarity-based privacy metrics (SBPMs) to guarantee privacy through pass/fail tests. We later exploit these properties to build a successful reconstruction attack. **Issue 1: No Theoretical Guarantees.** First and foremost, SBPMs do not provide any theoretical or analytical guarantees. They do not define a threat model or a strategic adversary, thus ignoring some of the most fundamental security principles (Anderson, 2020). Instead, SBPMs rely on a number of arbitrarily chosen statistical tests. This prompts a few questions, e.g., why choose these specific tests instead of others? What exactly do they protect against? How were the passing criteria selected? Furthermore, SBPMs do not rule out vulnerabilities to current or future adversarial attacks, including ReconSyn (see Sec. 4). **Issue 2: Privacy as Binary Property.** SBPMs treat privacy leakage as a binary property, i.e., the synthetic data is either “truly” private or not. This is despite the fact that SBPMs do not rely on, e.g., an adversarial advantage that can be proven asymptotically small under certain assumptions, e.g., as done in Cryptography. In fact, using pass/fail tests removes analysts’ sense of direction and ability to measure privacy leakage across a continuous interval. This has two consequences. First, it is hard to know what choices (e.g., models, hyperparameters, etc.) 
contribute to making the synthetic data private. Second, releasing a single private synthetic dataset is deemed as safe as releasing many (as long as they pass the tests), even though this increases leakage since the provider needs to call the train/test data every time new data is generated. Arguably, this is related to the “Fundamental Law of Information Reconstruction” (Dwork & Roth, 2014), stating that overly accurate answers to too many questions will destroy privacy in a spectacular way. **Issue 3: Non-Contrastive Process.** SBPMs are computed in a non-contrastive way. That is, they do not compare the computations when an individual is included or not. Since there is no noise or randomness ingested into the process, plausible deniability is ruled out. Thus, calculating the privacy metrics leads to a variety of attacks, including simple ones like differencing attacks. For example, if an adversary makes two calls to the metrics, one with and one without a particular individual, they can deduce some information (e.g., whether the individual is an exact match or closer than 5th percentile) with 100% confidence since the computations will carry no uncertainty. **Issue 4: Lack of Worst-Case Analysis.** All SBPMs use simple statistics (average or 5th percentile) as passing criteria. This leaves room for maliciously crafted synthetic datasets that might pass the tests but still reveal sensitive data. Also, this does not protect against worst-case scenarios, i.e., memorization and replication of outliers, which, combined with the lack of plausible deniability (from Issue 3), increases the adversary’s chance of launching a successful attack. Unfortunately, using a held-out dataset for comparison does not alleviate the problem due to what is commonly and informally defined as the “Generalization Implies Privacy” fallacy, i.e., privacy is a worst-case problem while generalization is an average-case. Put simply, even if all tests pass, i.e., the model generalizes, memorization cannot be ruled out (Song et al., 2017). **Issue 5: Privacy as Data Property.** SBPMs expect a single synthetic dataset as input, which has several implications. First, it means we measure the privacy of a specific dataset and not the generative model/process. Therefore, privacy becomes a property of the data rather than the generating process. Also, SBPMs require running the metrics on each and every generated synthetic data in order to guarantee privacy which, unfortunately, actually leaks more privacy (as discussed in Issue 2). Second, the specific synthetic dataset may or may not be representative of the distribution captured by the model, which could lead to inconsistent results across generation runs. Typically, privacy is defined as a statistical property over many such instances. Due to space limitation, we present the remaining three limitations in App.D– incorrect interpretation, risk underestimation, and implementation challenges. We also illustrate the inconsistency and untrustworthiness of the metrics through counter-examples in App.E. **Take-Aways.** In summary, relying entirely on empirical evaluations to “guarantee” privacy present several critical weaknesses that may lead to an artificially high sense of security. Unfortunately, this approach is ineffective and embeds severe vulnerabilities to privacy attacks. Figure 2: Overview of ReconSyn. The provider 1. splits the real data into train/test, 2. fits a generative model on the train data, 3. generates synthetic data (privacy filters are applied), 4. 
runs the privacy metrics on the synthetic data. The adversary can make API calls (they have black-box access) to the fitted generative model and privacy metrics. They a. generate synthetic datasets, b. run them through the privacy metrics to observe the pass/fail tests and scores (if tests pass), c. reconstruct underrepresented train records (outliers) through SampleAttack and SearchAttack (introduced in Algorithm 1). 4 THE RECONSYN RECONSTRUCTION ATTACK We now introduce a novel attack, ReconSyn, aimed at recovering the outliers in the train data with minimal assumptions. An overview of the attack is reported in Fig. 2. Adversarial Model. A synthetic data provider has access to train and test datasets ($D_{train}^n$ and $D_{test}^n$), trains a generative model ($G_\theta(D_{train}^n)$), generates synthetic datasets ($D_{synth}^n$), and deems them private if they pass all privacy tests (a combination of privacy metrics and/or filters). We assume a strategic reconstruction adversary with black-box access to the trained generative model ($G_\theta(D_{train}^n)$) and the privacy metrics. The adversary has the capability to sample from the trained model to generate synthetic datasets. They can add or remove data points to/from the synthetic data and make calls to the metrics APIs to observe the outcome of the tests and, in case all tests pass, the scores. Their goal is to reconstruct, or completely violate the privacy of, the train data outliers ($D_{train}^{out}$) by building a collection of synthetic datasets considered private by the provider. Algorithm Steps. ReconSyn (pseudocode in Algorithm 1 in App. F) comprises two sub-attacks: 1) SampleAttack, which generates and evaluates samples drawn from the generative model, and 2) SearchAttack, which strategically examines the history of records generated in the first phase. Next, we offer an overview of the strategies and present more details in App. F. As a first step, the adversary uses OutliersLocator to identify regions with underrepresented records or outliers. This involves generating a large synthetic data sample, fitting a Gaussian Mixture model, and selecting the smallest isolated clusters. SampleAttack follows a simple procedure. In each round, it generates synthetic data, then identifies potential outliers using OutliersLocator. It removes data already examined in previous rounds, as recorded in its history. The attack then queries the metrics API to check for exact matches (if all tests pass) and adds all queried data to the history. Informally, the idea behind SearchAttack involves selecting close records from the history and ‘shaking’ or ‘fixing’ them one column at a time until an exact match is found. For a specific record, two steps are taken. We first identify columns that have not yet been reconstructed using its neighboring dataset, which is a square matrix where each row differs in a single column value. We then iteratively test possible values for these columns, filtering out records through OutliersLocator and the history. Ultimately, this leads to another match. Plausibility of the Attack. ReconSyn relies on three realistic assumptions, in that the adversary can: 1. Generate an unlimited number of synthetic datasets, which is one of the main selling points for adopting synthetic data (Gretel [2023a], MOSTLY AI [2022a], Hazy [2022]). 2. Add or remove records – data augmentation is a popular use case advertised by synthetic data companies (Tonic [2022], Gretel [2023a], MOSTLY AI [2023c]). 3.
Access the privacy tests and scores for every generation run (if all tests pass); again, this is explicitly offered by the main companies (MOSTLY AI [2022b, 2023a]). Note that the adversary does not have any side knowledge: no access to the train/test data or even possession of data from the same distribution, no background information of the used generative approach, model, hyperparameters, nor model updates or gradients. They are also agnostic to the dataset type and the specific use case/downstream task.

| Model | 2d Gauss Sample | Adult Small Sample | Adult Sample | Adult Search | Census Sample | Census Search | MNIST Sample | MNIST Search |
|-----------|------|------|------|------|------|------|------|------|
| Oracle | 0.95 | – | – | – | – | – | – | – |
| PrivBayes | – | 1.00 | 0.44 | 0.95 | 0.54 | 0.98 | 0.00 | 0.99 |
| MST | – | 1.00 | 0.05 | 0.90 | 0.84 | 0.99 | 0.00 | 0.97 |
| DPGAN | – | 0.96 | 0.02 | 0.78 | 0.15 | 0.82 | 0.00 | 0.97 |
| PATE-GAN | – | 1.00 | 0.02 | 0.81 | 0.37 | 0.83 | 0.00 | 0.97 |
| CTGAN | – | 0.99 | 0.00 | 0.80 | 0.74 | 0.90 | 0.00 | 0.80 |

Table 1: Overview of the performance of the ReconSyn attack against different models ($\epsilon = \infty$) and datasets.

**Why Outliers.** Our motivation for targeting the underrepresented regions in the train data is their potential correspondence with the most vulnerable individuals. They are inherently more difficult to model accurately, which makes their reconstruction more challenging (see App. C). Furthermore, outliers are at a higher risk of being memorized by models (Feldman, 2020) and are more susceptible to membership inference attacks (Stadler et al., 2022). Regulators, such as Information Commissioner’s Office (2022), have explicitly highlighted the increased sensitivity of outliers. **Why Reconstruction.** We choose to build a reconstruction attack as this is one of the most powerful attacks – it exposes all (sensitive) attributes – thus unequivocally demonstrating the untrustworthiness of similarity-based approaches to reason about privacy. If the attack is successful in reconstructing even a handful of train outliers with high precision, this will constitute a serious privacy violation (Carlini et al., 2022). In fact, reconstruction implies the ability to single individuals out and enable their identification or link them to the real data. This, in turn, means that the process of generating synthetic data and guaranteeing its privacy has failed at least two of the three privacy guarantees outlined by European Commission Article 29 Working Party (A29WP, 2014), namely, singling out and linkability. Therefore, the process cannot be considered anonymous as per GDPR. **Take-Aways.** ReconSyn is powerful and generalizable since it achieves both high recall and precision (see Sec. 5.1). Precision is perfect as we reconstruct outliers with 100% confidence (i.e., there are no false positives). Furthermore, assuming the same setup, other attacks such as membership and attribute inference could be considered specific subcases of ReconSyn (see App. F). ## 5 Evaluation In this section, we demonstrate that ReconSyn successfully recovers the train outliers in different settings. Our experiments are conducted against the models and datasets reviewed in App. C. ### 5.1 Reconstruction of Train Outliers We measure the performance of ReconSyn (SampleAttack and SearchAttack) against PrivBayes, MST, DPGAN, PATE-GAN, and CTGAN on increasingly more complex datasets (2d Gauss, Adult Small, Adult, Census, MNIST).
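For reference, the OutliersLocator and SampleAttack components outlined in Sec. 4 can be sketched as follows. Here `generate` and `metrics_api` are hypothetical stand-ins for the provider's black-box model and metrics APIs, and selecting outlier clusters purely by size is a simplification of "smallest isolated clusters".

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_outliers_locator(big_synth_sample, n_components=30, smallest_k=3):
    """OutliersLocator sketch: fit a Gaussian Mixture on a large synthetic sample and
    treat the smallest clusters as the underrepresented (outlier) regions."""
    gmm = GaussianMixture(n_components=n_components).fit(big_synth_sample)
    counts = np.bincount(gmm.predict(big_synth_sample), minlength=n_components)
    outlier_clusters = set(np.argsort(counts)[:smallest_k])
    return gmm, outlier_clusters

def sample_attack(generate, metrics_api, gmm, outlier_clusters, rounds=1000):
    """SampleAttack sketch: each round, generate data, keep unseen candidate outliers,
    and use the metrics API (e.g., the IMS exact-match signal) to confirm matches."""
    history, reconstructed = set(), []
    for _ in range(rounds):
        synth = generate()                                   # np.ndarray of discretized records
        candidates = synth[np.isin(gmm.predict(synth), list(outlier_clusters))]
        new = [tuple(r) for r in candidates if tuple(r) not in history]
        matches = metrics_api(np.array(new)) if new else []  # which candidates exactly match train
        reconstructed.extend(matches)
        history.update(new)
    return reconstructed, history
```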
Our experiments are summarized in Table 1. Since ReconSyn is highly successful in all settings, we do not report the utility of the generated synthetic data. #### 5.1.1 ReconSyn, SampleAttack We launch SampleAttack on all five datasets. We run it for 1,000 rounds for the first three datasets and for 5,000 rounds for the last two. The attack exhibits mixed results: regardless of the target model, it is very successful on 2d Gauss and Adult Small, reconstructing at least 95% of train outliers, but struggles for the remaining three (see Table 1). **2d Gauss.** Starting with 2d Gauss, we attack the oracle and display the results in Fig. [10]. Even though i) no generative model has been exposed to train data, and ii) the oracle has no memory of the synthetic data it has generated, SampleAttack manages to perfectly reconstruct 95% train outliers due to the privacy metrics’ leakage. In other words, if the adversary had no access to the metrics, they would not be able to gain information about the train data by generating new data. Figure 3: Proportion of reconstructed train outliers for increasing rounds by SampleAttack, Adult Small, Adult, and Census. **Adult Small.** For Adult Small, we use SampleAttack against all five generative models and report the number of reconstructed outliers in Fig. 3 (top five lines). For PrivBayes, MST, and PATE-GAN, the attack quickly reconstructs around 90% outliers after just 10 rounds and eventually reaches 100%. For DPGAN and CTGAN, the attack plateaus at around 85% after 40 rounds, but by round 1,000, it slowly improves and achieves 96% and 99%, respectively. We believe that SampleAttack is extremely successful on this dataset because its domain is relatively small ($10^5$). **Adult.** On Adult, which has twice the number of columns and cardinality of $10^{15}$, the models are much less likely to memorize and reproduce individual data points. Indeed, apart from PrivBayes, SampleAttack only recovers 5% of the outliers – see Table 1 and Fig. 3 (bottom five lines). **Census.** Even though Census has roughly twice the columns/rows and much higher cardinality, SampleAttack is more successful (excluding PrivBayes on Adult), recovering on average 53% outliers (see middle lines in Fig. 3). Interestingly, attacking CTGAN yields better results than PrivBayes. Also, the recovery rate follows a linear trend (vs. logarithmic for Adult Small). **MNIST.** Finally, looking at MNIST which has even higher dimensionality/cardinality, SampleAttack fails completely and does not reconstruct even a single outlier. In fact, in Fig. 4, we see that CTGAN and PrivBayes generate images with the highest similarity to the real ones but still at a Hamming distance of at least 6 (i.e., number of different pixels). All models, however, create outliers further away from the train data compared to the distances between test and train. This is a confirmation that all privacy tests pass and test/synthetic datasets do not contain copies of the train data. ### 5.1.2 ReconSyn, SearchAttack We run the follow-up SearchAttack on all models where SampleAttack achieves less than 95% reconstruction success, i.e., we attack all five models on Adult, Census, and MNIST. We run SearchAttack for up to 1/4 of the distances; in other words, we go through the history and try to “fix” at most 4 columns of Adult/Census and 16 of MNIST. While widening the search to broader distances could lead to better results, we put the efficiency of our attack to the test by limiting the computations. 
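A minimal sketch of the column-fixing step evaluated here is given below, assuming hypothetical `is_exact_match` (the signal derived from the metrics API) and `is_plausible` (the OutliersLocator/history filter) helpers.

```python
from itertools import product

def search_attack_fix(record, columns_to_fix, domain, is_exact_match, is_plausible):
    """Try to repair a near-miss record from the SampleAttack history by exhaustively
    testing values for the columns believed to be wrong. `record` is a dict mapping
    column names to values; `domain[c]` lists the possible values of column c."""
    candidates = product(*(domain[c] for c in columns_to_fix))
    for values in candidates:
        fixed = dict(record)
        fixed.update(zip(columns_to_fix, values))
        if not is_plausible(fixed):
            continue              # skip candidates outside the outlier clusters or already tried
        if is_exact_match(fixed):
            return fixed          # reconstructed train outlier
    return None
```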
In all cases, we manage to successfully reconstruct over 78% of all train outliers; see Table 1. **Adult.** For Adult, SearchAttack easily recovers the majority of train outliers, between 78%-95%. The attack is both more effective and efficient on the graphical models since it does not need to go back far in history – only a couple of distances (or columns). Most likely, this is due to two factors: 1) SampleAttack was already more successful for these two models, generating a more diverse data history, and 2) graphical models tend to outperform GANs on low-dimensional datasets like Adult and simple downstream tasks like marginal preservation [Ganev et al., 2023]. Nonetheless, SearchAttack reconstructs most outliers against the GAN models, too, even though it requires searching further back (distance of 4). **Census.** SearchAttack’s performance on Census is similar – it reconstructs more outliers against the graphical models (99%) than the GANs (85% on average). Even though attacking PrivBayes starts at a disadvantage compared to CTGAN, SearchAttack manages to recover more outliers. As before, this could be because PrivBayes generates a richer history and CTGAN potentially overfits. **MNIST.** As for MNIST, despite the large data cardinality, SearchAttack reconstructs more than 80% of the train outliers. To reduce the search space, the adversary can be strategic, e.g., excluding some pixels (i.e., the ones on the sides of the image or the “frame”) by setting their value to 0 after observing the common pattern in a collection of generated potential outliers. This way, the adversary is restricted and cannot fully reconstruct 21 out of the 488 outliers. Nonetheless, this could be considered a good trade-off vis-à-vis the number of saved computations – specifically, a factor of \( \approx 480 = 30 \cdot 16 \) (30 fixed pixels, 16 combinations per bin) per search. We report the number of exactly reconstructed train outliers and those within 1 pixel (even though the adversary can easily get an exact train data match by running the attack for one step without any restrictions). A subset of the recovered digits for all models is shown in Fig. 5. Overall, SearchAttack is very successful despite SampleAttack’s failure to recover any outliers – aside from CTGAN, attacking all other models results in reconstructing at least 97% of outliers. This might be because the generators manage to create diverse synthetic images not too dissimilar from the outliers (as already shown in Fig. 4). Conversely, even though CTGAN generates the closest images, that does not result in recovering more outliers. Potentially, this could be due to a mode collapse or the model’s specific strategy of embedding categorical columns (both DPGAN and PATE-GAN use simple one-hot encoding). Unsurprisingly, out of the restricted 21 outliers, the adversary attacking CTGAN manages to recover only 6 compared to at least 16 for the other models. ### 5.1.3 Take-Aways Our novel attack, ReconSyn, successfully reconstructs at least 78% of the train outliers with all tested models and datasets. SampleAttack performs better on lower-dimensional datasets but fails to recover any records for MNIST. However, the follow-up SearchAttack achieves an average of 90% success on the wider datasets and is slightly more successful when launched against graphical models. ### 5.2 DP and Low Utility Generative Models #### 5.2.1 DP Generative Models We now assess whether training the generative models with DP guarantees can prevent or minimize the performance of ReconSyn.
We simulate company product deployments which combine DP training with unrestricted metric access to the train data (see App. A). We experiment with the 4 models relying on different mechanisms – namely, Laplace, Gaussian, DP-SGD, and PATE – while varying the privacy budget in the range \{∞, 1, 0.1\} on Adult Small. We keep \( \delta \) constant to \( 1/n \). Again, we launch SampleAttack (1,000 rounds) on all models and SearchAttack (up to 1 column) in the cases where the former fails to achieve at least 95% reconstruction success. We report the privacy-utility trade-off in Fig. 6. Regardless of the attacked model, applied privacy budget, or achieved utility, ReconSyn is successful at recovering more than 95% of its targets (note the dashed vertical line). **Utility Evaluation.** Utility is measured through the lenses of similarity, aiming to be consistent with other studies (Tao et al., 2022). More precisely, we report a single similarity score between train and synthetic data by calculating all 1-way marginals and 2-way mutual information scores (normalized between 0 and 1) and averaging them. As expected, applying DP generally reduces utility. Breaking down the effect on the models of the same type, we see that MST’s drop is much lower than for PrivBayes. The same occurs for PATE-GAN compared to DPGAN. This is due to the specific DP mechanisms used by the different models, as studied by (Ganev et al., 2023). **Privacy Evaluation.** Privacy is expressed as the performance of ReconSyn in terms of the proportion of reconstructed train outliers. Applying DP to the models with higher utility (MST and PATE- GAN) does not even defend against SampleAttack. Although applying DP does protect against SampleAttack for PrivBayes and DPGAN, this comes with a big drop in utility (as discussed above). Nonetheless, SearchAttack recovers all train outliers against these two models too. Even though DP does not help, this does not mean that DP does not work. In fact, in this context, the leakage comes from the privacy metrics; as they require access to the train data and are deterministic (as discussed in Issue 3 in Sec. 3), they break the end-to-end DP pipeline. No matter what other privacy mechanism is added on top of the metrics, it is unlikely to mitigate the problem. 5.2.2 Low Utility Generative Models Next, we demonstrate that ReconSyn is successful even when attacking generative models with severely restricted capabilities, such as Independent and Random, on Adult Small. Since neither model has the ability to model the data well, the adversary would not be able to locate the clusters with outliers through OutliersLocator. Instead, we set their goal to reconstruct any train data points. Keeping the same settings, we launch SampleAttack for 1,000 rounds. Evaluation. The adversary manages to recover around 79% of the train data against both models. Unsurprisingly, the recovery rate on Random is much slower than Independent, i.e., the adversary needs more rounds to achieve comparable results. Incidentally, in both cases, the adversary reconstructs all 192 train outliers. This could be due to the small data support and randomness component, as both models have a higher chance of generating data points with low probability compared to the five main models, which learn to generate realistic data better. If the adversary successfully reconstructs a large proportion of the train data points, they could use them to fit OutliersLocator and locate the outliers as a last step. 
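As a side note on the utility axis of Fig. 6 (Sec. 5.2.1), the similarity score described above could be computed along the following lines; the exact normalization in the paper may differ, so this is only one plausible reading, with 1-way marginals compared via total variation distance and 2-way dependence compared via normalized mutual information.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def marginal_similarity(real, synth, col):
    """1 minus the total variation distance between the two 1-way marginals of `col`."""
    p = real[col].value_counts(normalize=True)
    q = synth[col].value_counts(normalize=True)
    support = p.index.union(q.index)
    return 1.0 - 0.5 * np.abs(p.reindex(support, fill_value=0) - q.reindex(support, fill_value=0)).sum()

def pairwise_mi_similarity(real, synth, c1, c2):
    """Compare the normalized mutual information of a column pair in real vs. synthetic data."""
    mi_real = normalized_mutual_info_score(real[c1], real[c2])
    mi_synth = normalized_mutual_info_score(synth[c1], synth[c2])
    return 1.0 - abs(mi_real - mi_synth)

def similarity_score(real, synth):
    """Average of all 1-way and 2-way similarities (pandas DataFrames with discretized columns)."""
    cols = list(real.columns)
    one_way = [marginal_similarity(real, synth, c) for c in cols]
    two_way = [pairwise_mi_similarity(real, synth, a, b)
               for i, a in enumerate(cols) for b in cols[i + 1:]]
    return float(np.mean(one_way + two_way))
```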
5.2.3 Take-Aways Attacking models trained with DP guarantees (even with $\epsilon = 0.1$) or models with low utility (Independent and Random) does not mitigate ReconSyn. In fact, in all cases, the attack manages to reconstruct more than 95% of the train outliers due to access to the privacy metrics. 6 Discussion and Conclusion This paper presents the first in-depth analysis of the most common similarity-based privacy metrics (SBPMs) used in the synthetic data industry. We empirically demonstrate their shortcomings by building ReconSyn, a novel reconstruction attack that successfully reconstructs most train outliers. Our work proves that reasoning about privacy in the context of synthetic data purely through empirical evaluation and SBPMs is inadequate. Worse yet, we show that the privacy metrics/filters commonly used by leading commercial actors are unreliable and inconsistent. The effectiveness of ReconSyn consistently demonstrates that meaningful privacy protections are often nonexistent even if all privacy tests pass. In particular, ReconSyn is successful even when attacking low-utility generators and models with DP guarantees due to the severe information leakage coming from the access to the metrics. In all cases, we can completely reconstruct most outliers and thus single them out and link them to the real data, failing two of the required GDPR privacy guarantees. As a result, synthetic data whose privacy is guaranteed through SBPMs cannot be considered anonymous. Broadly, we can compare providing privacy through SBPMs to the privacy guarantees of the Diffix system, which are often insufficient (Pyrgelis et al., 2018; Gadotti et al., 2019; Cohen & Nissim, 2020a). Even though the functionalities are different – the former returns synthetic data and statistical pass/fail tests and scores, the latter answers queries – both allow for an unlimited number of queries while not implementing robust privacy mechanisms like DP, ultimately leading to severe privacy violations. We argue that it is crucial for practitioners to prioritize privacy concerns and rely on established notions of privacy from the academic community to avoid potentially catastrophic outcomes. (In App. I, we include further discussion on DP and future research directions.) Ethics Statement. Our goal is not to undermine companies’ products but to demonstrate how essential it is to emphasize privacy considerations and rely on established academic notions of privacy when deploying real-world systems. Even though we are not directly attacking deployed systems or accessing/processing any personal data, we shared our work with the two main synthetic data companies using SBPMs (Gretel and MOSTLY AI) in the spirit of responsible disclosure. We have provided them with more than 90 days for a response per Google Project Zero’s recommendations and offered to keep the paper confidential until submission. As of September 28, 2023, they have responded to our notice, and we are currently working with them on the next steps. Reproducibility Statement. We make considerable efforts to make our work reproducible. First, we clearly state all of our assumptions throughout the paper. Second, we provide references and step-by-step explanations of how we accessed and prepared the datasets and generative models used in our evaluation. Third, we include a detailed description and pseudocode of our new attack. Last, we intend to share the code with the reviewers/ACs during the discussion period and eventually publicly (once the paper is published). REFERENCES A29WP.
Opinion on anonymisation techniques. https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf, 2014. Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In ACM CCS, 2016. Nazmiye Ceren Abay, Yan Zhou, Murat Kantarcioglu, Bhavani Thuraisingham, and Latanya Sweeney. Privacy preserving synthetic data release using deep learning. In ECML PKDD, 2018. Gergely Acs, Luca Melis, Claude Castelluccia, and Emiliano De Cristofaro. Differentially private mixture of generative neural networks. IEEE TKDE, 2018. Ross Anderson. Security engineering: a guide to building dependable distributed systems. John Wiley & Sons, 2020. Meenatchi Sundaram Muthu Selva Annamalai, Andrea Gadotti, and Luc Rocher. A linear reconstruction approach for attribute inference attacks against synthetic data. arXiv:2301.10053, 2023. AWS. How to evaluate the quality of the synthetic data – measuring from the perspective of fidelity, utility, and privacy. https://aws.amazon.com/blogs/machine-learning/how-to-evaluate-the-quality-of-the-synthetic-data-measuring-from-the-perspective-of-fidelity-utility-and-privacy/, 2022. Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, and Ankit A Siva. Differentially private query release through adaptive projection. In ICML, 2021. Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. In NeurIPS, 2019. Borja Balle, Giovanni Cherubin, and Jamie Hayes. Reconstructing training data with informed adversaries. In IEEE S&P, 2022. Gary Benedetto, Jordan C Stanley, Evan Totty, et al. The creation and use of the SIPP synthetic Beta v7. 0. US Census Bureau, 2018. Kuntai Cai, Xiaoyu Lei, Jianxin Wei, and Xiaokui Xiao. Data synthesis via differentially private markov random fields. PVLDB, 2021. Nicholas Carlini, Úlfar Erlingsson, and Nicolas Papernot. Distribution density, tails, and outliers in machine learning: Metrics and applications. arXiv:1910.13427, 2019a. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security, 2019b. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In USENIX Security, 2021. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership inference attacks from first principles. In IEEE S&P, 2022. Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. arXiv:2301.13188, 2023.
yTbAGlu4jR
Section 4.3 discusses the integration of an 'ELBO decomposition trick' into the method, which contributes to the final loss function. The specific advantages of incorporating this technique, particularly in the context of addressing limited overlap issues, have not been fully articulated. What incremental value does this approach provide, and how does it interact with the other components of the loss function, namely the prognostic score-based and balancing score-based losses?
Learning Identifiable Balanced Prognostic Score for Treatment Effect Estimation Under Limited Overlap Anonymous authors Paper under double-blind review Abstract Understanding individual-level treatment effects is a fundamental and crucial problem in causal inference. In this paper, our objective is to tackle the issue of limited overlap, where certain covariates only exist in a single treatment group. We demonstrate that, under weak conditions, it is possible to simultaneously recover identifiable balanced prognostic scores and balancing scores. By leveraging these scores, we relax the requirement of overlapping conditions in a latent space, enabling us to generalize beyond overlapped regions. This approach also allows us to handle out-of-distribution treatments with no overlap. Additionally, our approach is adaptable to various tasks, including both binary and structured treatment settings. Empirical results on different benchmarks demonstrate that our method achieves state-of-the-art performance. 1 Introduction Treatment effect estimation plays a vital role in fields that require accurate decision making, such as medicine (Grzybowski et al., 2003), economics (Athey & Imbens, 2017), and education (Davies et al., 2018). The fundamental problem of causal inference (Holland, 1986) is that we can never observe the missing counterfactuals. Randomized control trials obviate these issues through randomization, but can be at times expensive (Sibbald & Roland, 1998) and impractical (Deaton & Cartwright, 2018). Therefore, deriving precise individual-level treatment effect from observational data is important and highly valuable. The central challenge in causal inference from observational data is selection bias (Imbens & Rubin, 2015), where the distributions between treatment arms are different, i.e., \( p(t|x) \neq p(t) \). Previous studies have primarily focused on selection bias resulting from confounding variables, which are variables that causally affect both the treatment and outcome, and have relied on the unconfoundedness assumption (Rosenbaum & Rubin, 1983). However, instruments, which are covariates that causally affect only the treatment, can also introduce selection bias (Hassanpour & Greiner, 2019). As we include more covariates that could potentially act as confounders or instruments, it becomes increasingly challenging to satisfy the requirement of overlapping support among treatments. Furthermore, in real-world scenarios, the treatment selection mechanism \( p(t|x) \) that leads to selection bias can inherently lack overlap. For instance, a cautious doctor might not perform surgeries on elderly patients in all cases, making it difficult to generalize to surgical treatments for the elderly. As Pearl (2009) states, “Whereas in traditional learning tasks we attempt to generalize from one set of instances to another, the causal modeling task is to generalize from behavior under one set of conditions to behavior under another set.” In the case of limited overlap, the causal model needs to generalize to previously unadministered treatments, which can even be completely different, and this challenge frequently arises in structured settings (Ramsundar et al., 2019). Previous approaches aimed at mitigating selection bias often assume unconfoundedness and overlook the issue of limited overlap. Reweighting-based methods (Farrell, 2015; Gretton et al., 2009) typically rely on the presence of common support between the treatment and control groups to adjust for distribution mismatch. 
Subsequently, there has been an increasing interest in balanced representation learning since Johansson et al. (2016). However, most of these methods primarily tackle selection bias and do not explicitly consider the problem of limited overlap. Wu & Fukumizu stands as a pioneering work that considers limited overlap in the within-sample setting by learning an entangled prognostic score (Hansen, 2008). To effectively address selection bias, including the potential challenge of limited overlap, we employ a latent identifiable generative model (Khemakhem et al., 2020) that simultaneously learns identifiable balancing score and balanced prognostic score by disentangling $X$. Identifiable balancing score is naturally obtained by concatenating identifiable instruments and confounders, while identifiable balanced prognostic score is obtained by concatenating identifiable confounders and adjustments. Intuitively, modeling identifiable balancing score helps us identify the root cause of selection bias, while modeling identifiable balanced prognostic score enables us to directly estimate the outcome by leveraging the learned identifiable disentangled representation that are direct causes of the outcome $Y$. Our contributions can be summarized as follows: i) We demonstrate that, under weak conditions, it is possible to simultaneously recover the identifiable balanced prognostic score and balancing score. Furthermore, we provide theoretical results on how a balanced prognostic score effectively handle the limited overlap problem. ii) We introduce a practical and generalized disentanglement method called Disentangled Identifiable vaRational autoEncoder (DIRÉ). This method is designed to model the data generation process with identifiability guarantee. iii) We apply our method to both binary and structured treatment settings. Notably, we demonstrate how an identifiable balanced prognostic score can generalize to out-of-distribution treatments with zero overlap, showcasing its robustness. iv) Through comprehensive experiments, we demonstrate that our method outperforms other state-of-the-art models in their respective settings. This superiority is evident in both the widely-used de facto binary treatment benchmark and various limited overlapping synthetic datasets. Synthetic datasets, along with code, will be made publicly available upon publication. 2 RELATED WORK There are two main approaches to addressing selection bias. One approach involves sample reweighting to align different distributions. A common method within this approach is to use propensity scores for inverse weighting of samples (Rosenbaum & Rubin, 1983; Austin, 2011; Allan et al., 2020; Freedman & Berk, 2008). However, weighting based on propensity scores can be unstable and lead to high variance (Swaminathan & Joachims, 2015). To address this issue, researchers have proposed more stable weighting methods. For instance, Gretton et al. (2009) reweights samples to achieve distribution matching in a high dimensional feature space, while Zubizarreta (2015) learns weight that minimizes variance and balances distributions simultaneously. Athey et al. (2018) combines sample reweighting and regression adjustments through approximate residual balancing, offering the benefits of both approaches. Ever since Johansson et al. (2016), there has been growing interest in mitigating selection bias via minimizing distribution discrepancy (Mansour et al., 2009) of learned representations (Bengio et al., 2013). Shalit et al. 
(2017) improve upon Johansson et al. (2016)'s work by learning a treatment-specific function on top of a prognostic score (Hansen, 2008), so that the treatment bit does not get lost in the distribution alignment stage. Hassanpour & Greiner (2019b) propose learning disentangled representations to clearly identify factors that contribute to either the treatment $T$, the outcome $Y$, or both, in order to better account for selection bias and achieve improved results. Wu & Fukumizu (2021) provide identification guarantees in the within-sample setting, learning a prognostic score whose dimension is not higher than that of the outcome $Y$. In our work, we aim to learn disentangled representations with a causal generative process that adheres to independent causal mechanisms (Schölkopf et al., 2021). Disentangled representations are preferred because, unlike entangled representations, they allow for sparse or localized changes in the causal factors when the distribution undergoes interventions (Schölkopf et al., 2021), making our model more robust to such changes. Several approaches have been proposed to address the limited overlapping problem. Crump et al. (2009) suggest using optimal sub-samples to estimate the average treatment effect. Grzybowski et al. (2003) exclude patients whose propensity scores cannot be matched. Jesson et al. (2020) focus on identifying the limited overlapping regions without providing estimations. Oberst et al. (2020) provide an interpretable characterization of the distributional overlap between treatment groups. 3 PRELIMINARIES Our objective is to estimate \( \mathbb{E}[Y(t)|X = x] \) for all \( x \in \mathcal{X} \) and \( t \in \mathcal{T} \), where \( (x_i, t_i, y_i) \) denotes our dataset with \( x_i \) as the observed covariates, \( t_i \) as the administered treatment, and \( y_i \) as the corresponding outcome. This estimation allows us to accurately assess \( \mathbb{E}[Y(t_i) - Y(t_j)|X = x] \) for all \( t_i, t_j \in \mathcal{T} \) and \( x \in \mathcal{X} \). Here, \( Y(t) \) refers to the potential outcome, representing the hidden value that would have been observed if \( T = t \) had been administered. By applying the backdoor criterion (Pearl, 2009) to the causal graph depicted in Figure 1, we can identify the individual-level treatment effect once we recover \( Z_2 \) and \( Z_1 \). We adopt the generalized definition of the overlapping condition from Wu & Fukumizu (2021): **Definition 1** \( V \) is overlapping if \( P(T = t|V = v) > 0 \) for any \( t \in \mathcal{T}, v \in \mathcal{V} \). If the condition is violated at some value \( v \), then \( v \) is non-overlapping and \( V \) is limited-overlapping. As such, to accurately estimate the treatment effect, it is preferable to obtain a lower-dimensional representation (Bengio et al., 2013) that exhibits overlap, even if the original covariate space is limited overlapping. We adapt Wu & Fukumizu (2021)’s definition of prognostic score (Hansen, 2008) to accommodate multiple treatments: **Definition 2** A prognostic score (PGS) is \( \{p(X,t)\}_{t \in \mathcal{T}} \), such that \( Y(t) \perp\!\!\!\perp X | p(X,t) \), where \( p(X,t) \) is a function defined on \( \mathcal{X} \times \mathcal{T} \). A PGS is called a Balanced Prognostic Score (bPGS) if \( p(x,t_i) = p(x,t_j) \) for all \( t_i, t_j \in \mathcal{T} \). Since the prognostic score serves as a sufficient statistic for the outcome \( Y \), it is only necessary to fulfill the overlapping condition over prognostic scores, rather than over the covariates themselves.
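To illustrate Definition 1 operationally, one could flag empirically non-overlapping points by estimating $P(T = t \mid V = v)$ with any probabilistic classifier. The sketch below (logistic regression, threshold `eps`) is our own diagnostic for exposition, not part of the method proposed later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def limited_overlap_mask(V, T, eps=1e-3):
    """Flag points v at which the estimated propensity P(T = t | V = v) is ~0 for some
    treatment arm, i.e. points where Definition 1 is (empirically) violated."""
    clf = LogisticRegression(max_iter=1000).fit(V, T)
    probs = clf.predict_proba(V)       # one column per treatment arm
    return (probs < eps).any(axis=1)   # True -> v is empirically non-overlapping

# Running the check on a learned low-dimensional representation (e.g. a bPGS) rather than
# on the raw covariates X is exactly the relaxation argued for above.
```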
Intuitively, requiring overlap over all covariates may be overly strict, as some of them may be generated by underlying instrumental latent factors and are therefore irrelevant for estimating the outcome. We will demonstrate this in a mathematically rigorous manner later on. 4 METHODOLOGY In this section, we offer a comprehensive introduction to our method. We begin by presenting the assumptions of the data generating process in Sec. 4.1. Following that, in Sec. 4.2, we demonstrate how a balanced prognostic score tackles the issue of limited overlap. Finally, in Sec. 4.3, we present our model architecture, which offers an identifiability guarantee, and provide a concise overview of its implementation. 4.1 DATA GENERATING PROCESS AND SETUP We assume that the Data Generating Process (DGP) follows the causal graph presented in Fig. 1(a). In this graph, the covariate \( X \) is generated from three latent variables: \( Z_1 \) (adjustment variable), \( Z_2 \) (confounder variable), and \( Z_3 \) (instrumental variable). The outcome \( Y \) is generated by \( Z_1 \) and \( Z_2 \), while the treatment \( T \) is generated by \( Z_2 \) and \( Z_3 \). Mathematically, the DGP assumptions can be formulated as follows: **Assumption 4.1** (DGP for covariates) The covariates are generated from the underlying ground-truth latent code \( \tilde{Z}_1 \) (adjustment variable), \( \tilde{Z}_2 \) (confounder variable), \( \tilde{Z}_3 \) (instrumental variable), where
\[ X = \tilde{K}(\tilde{Z}_1, \tilde{Z}_2, \tilde{Z}_3) = \tilde{K}_1(\tilde{Z}_1) \oplus \tilde{K}_2(\tilde{Z}_2) \oplus \tilde{K}_3(\tilde{Z}_3) \oplus \tilde{K}_4(\tilde{Z}_1, \tilde{Z}_2) \oplus \tilde{K}_5(\tilde{Z}_1, \tilde{Z}_3) \oplus \tilde{K}_6(\tilde{Z}_2, \tilde{Z}_3) \oplus \tilde{K}_7(\tilde{Z}_1, \tilde{Z}_2, \tilde{Z}_3) + e_1. \] (1)
In DIRE, we intend to model \( Z_1, Z_2 \) and \( Z_3 \), and the data generating process \( K \):
\[ X = K(Z_1, Z_2, Z_3) = K_1(Z_1) \oplus K_2(Z_2) \oplus K_3(Z_3) \oplus K_4(Z_1, Z_2) \oplus K_5(Z_1, Z_3) \oplus K_6(Z_2, Z_3) \oplus K_7(Z_1, Z_2, Z_3) + \epsilon_1, \] (2)
where \( \oplus \) denotes dimension concatenation. The random variables and mappings denoted by \( \tilde{\cdot} \) represent the ground-truth latent factors and mappings, while those without the symbol represent the learned counterparts. Consistent with the works of Wu & Fukumizu (2021) and Khemakhem et al. (2020), we assume \( \tilde{K}_1 \)–\( \tilde{K}_7 \) and \( K_1 \)–\( K_7 \) to be injective. **Assumption 4.2** (DGP for Y) The outcome is generated from the underlying ground-truth latent code \( \tilde{Z}_1, \tilde{Z}_2 \):
\[ Y = \tilde{J}(\tilde{Z}_1, \tilde{Z}_2, T) = \tilde{j}_t(\tilde{Z}_1, \tilde{Z}_2) + e_2 = \tilde{j}_t \circ p + e_2, \]
where the second equality is obtained through application of do-calculus (Pearl, 2009) in Fig. 1 and has been shown in Zhang et al. (2021). This is essentially a relaxation of assumption (G1') in Wu & Fukumizu (2021) without assuming \( j_t \) to be injective. Similarly, we have:
\[ Y = J(Z_1, Z_2, T) = j_t(Z_1, Z_2) + e_2. \]
**Assumption 4.3** (DGP for T) The treatment is generated from the underlying ground-truth latent code \( \tilde{Z}_2, \tilde{Z}_3 \), where
\[ T = \tilde{M}(\tilde{Z}_2, \tilde{Z}_3) + e_3, \]
and
\[ T = M(Z_2, Z_3) + e_3. \]
This assumption is just a mathematical formulation of the directed edges \((Z_3, T)\) and \((Z_2, T)\) in Fig. 1. Finally, inspired by Kaddour et al.
(2021), we make the following assumption: Assumption 4.4 (Product effect for prognostic score) \( \forall p \in \{p(x, t)\}, p \) can be factorized as: \[ p = (g_1(X)^T h_1(T), g_2(X)^T h_2(T), \ldots, g_n(X)^T h_n(T)) + \epsilon, \] \[ = (g_1(X), \ldots, g_n(X)) \begin{bmatrix} h_1(T) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & h_n(T) \end{bmatrix} + \epsilon, \] where there exists Reproducing Kernel Hilbert Space \( H_X \) and \( H_T \) such that \( g_i(X) \in H_X \) and \( h_i(T) \in H_T \) for \( 1 \leq i \leq n \). This assumption is considered mild, as highlighted in Kaddour et al. (2021). Subsequently, we will explore the universality of this assumption and demonstrate the relationship between prognostic score (PGS) and balanced prognostic score (bPGS) under this assumption. 4.2 IDENTIFICATIONS UNDER LIMITED-OVERLAPPING COVARIATE Limited overlap is a common occurrence in treatment effect estimation scenarios that involve high-dimensional covariates and multiple potential treatments. In this subsection, we initially illustrate how the requirement for overlap can be relaxed within a latent space. Furthermore, we demonstrate how the presence of an identifiable balanced prognostic score (bPGS) enables us to extend our generalization beyond regions of overlap. We first establish the generality of Assumption 4.4, and how we can derive a balanced prognostic score using a prognostic score. Proposition 1 (Universality of product effect formalization for prognostic score) Let $\mathcal{H}_{\mathcal{X} \times \mathcal{T}}$ be the given Reproducing Kernel Hilbert Space. For any $\epsilon > 0$ and any $f \in \mathcal{H}^n$, there is a $d \in \mathbb{N}$ such that there exist $2n$ d-dimensional function $g_i : \mathcal{X} \rightarrow \mathbb{R}^d$ and $h_i : \mathcal{T} \rightarrow \mathbb{R}^d$ such that $\|f - (g_1^T h_1, \ldots, g_n^T h_n)\|_{L_2(\mathcal{P}_{\mathcal{X} \times \mathcal{T}})} \leq \epsilon$. Thus, when provided with a prognostic score (PGS) $p_t \in p(x,t)$, we can always derive a balanced prognostic score (bPGS) $(g_1(X), \ldots, g_n(X))$. Referring to Fig. 1, we can interpret the learning of the bPGS as the inverse mapping of the generative process for the covariates $\mathcal{X}$. In other words, our model is inclined to acquire a more general bPGS, rather than just a PGS, which can be utilized for the downstream CATE task. In the following theorem, we show how learning bPGS enable us to relax the overlapping condition, and how bPGS enable us to generalize beyond non-overlapping regions, which frequently occurs in multiple and structured treatment setting. Theorem 1 Suppose Assumption 4.1 - Assumption 4.4 hold. Furthermore, $\tilde{K}_i$ and $K_i$ are injective for all $i$. Then if $\mathbb{E}_{p_\theta}[X|Z_1, Z_2, Z_3] = \mathbb{E}[X|\tilde{Z}_1, \tilde{Z}_2, \tilde{Z}_3]$, we have: 1. (Recovery of latent code) If either 1) $\tilde{K}_1, \tilde{K}_2$ and $\tilde{K}_3$ are not empty mapping, or 2) at least two of $\tilde{K}_4-\tilde{K}_7$ are non-empty mappings, $I(\Delta_T \tilde{Z}_1; T) = 0$, $I(\Delta_Y \tilde{Z}_3; Y|T) = 0$ for some injective $\Delta_T$ and $\Delta_Y$, $I(Z_2; T) \neq 0$ and $I(Z_2; Y) \neq 0$, then $Z_1 = \Delta_1 \circ \tilde{Z}_1$, $Z_2 = \Delta_2 \circ \tilde{Z}_2$, $Z_3 = \Delta_3 \circ \tilde{Z}_3$ for some injective mapping $\Delta_1$, $\Delta_2$, $\Delta_3$. 2. (Recovery of bPGS via subset of covariates) $Z = Z_1 \oplus Z_2 = v \circ p$ for some injective mapping $v$. 
Moreover, the overlapping condition can be relaxed onto $X' \subseteq X$ where where $X' := \{x \in X|k_4^{-1}(x) \text{ is overlapping}\} \cup \{x \in X|k_5^{-1}(x) \text{ and } k_6^{-1}(x) \text{ is overlapping}\} \cup \{x \in X|k_7^{-1}(x) \text{ is overlapping}\}$. 3. (OOD generalization on non-overlapping regions) Suppose $\tilde{f}_t(x) = \mathbb{E}[Y|X,T] = E_{p_\theta}[Y|X,T] = f_t(x)$ for all observed $(x,t) \in \mathcal{X} \times \mathcal{T}$. Suppose $\exists t' \in \mathcal{T}$ s.t. $\tilde{j}_t'$ and $\tilde{j}_t'$ are injective. Suppose there exist a RKHS $\mathcal{H}_P$ on the bPGS space, also $\tilde{j}_t^* \in \mathcal{H}_P$ and $\tilde{j}_t^* \circ \Delta \in \mathcal{H}_P$ for all $t^* \in \mathcal{T}$ where $\Delta := j_t'^{-1} \circ \tilde{j}_t'$. Then we have $||j_t \circ \Delta - \tilde{j}_t|| < \epsilon \Rightarrow |\tilde{f}_t(x) - f_t(x)| < \epsilon * C$ for some constant $C$ for all $t \in \mathcal{T}$. According to Theorem 1, the requirement for overlap can be relaxed to the variables $Z_1$ and $Z_2$. Furthermore, the acquisition of a balanced prognostic score (bPGS) allows for generalization to limited overlapping regions, as long as $j_t$ can be recovered. In our structured treatment setting, we empirically demonstrate that our recovered bPGS enables generalization even to out-of-distribution $j_t$ values with zero overlap, highlighting the advantages of learning an identifiable balanced prognostic score. 4.3 Model Architecture and Implementation To recover the underlying instrumental variables, confounding variables, and adjustment variables, we propose a method named Disentangled Identifiable Variational autoEncoder (DIRE) to reconstruct the covariates. In DIRE, we leverage treatment and outcome information as auxiliary supervision signals to guide the learning process and recover the identifiable latent factors. This process is illustrated in Fig. 1(b). Put more formally, Let $\theta = (f,g,T,\lambda)$ be parameters of the following generative model: $$p_\theta(x,z_1,z_2,z_3,z_4|t,y) = p_{T,\lambda}(z_1|y)p_{T,\lambda}(z_2|t,y)p_{T,\lambda}(z_3|t)p_g(z_4|z_1,z_2,z_3)p_f(x|z_4),$$ where we assume: $$p_\epsilon(x-f \circ g(z_1,z_2,z_3)) = p_f(x|z_4)p_g(z_1,z_2,z_3),$$ $$p_{T,\lambda}(z_1,z_2,z_3|t,y) = p_{T,\lambda}(z_1|y)p_{T,\lambda}(z_2|t,y)p_{T,\lambda}(z_3|t),$$ where in Eq. 10 $f$ and $g$ are injective, and in Eq. 11 we are requiring the generative process to be consistent with our causal model. The graphical model of decoder is shown in Fig. 1. The corresponding inference model factorizes as: \[ q_\phi(z_1, z_2, z_3, z_4 | x, t, y) = q_\phi(z_4 | x)q_\phi(z_1 | x, t)q_\phi(z_2 | z_4)q_\phi(z_3 | z_4, y). \] (12) Incorporating the ELBO decomposition trick (Chen et al., 2018) to better isolate the irrelevant factors from \( X \) from the latent factors of interest, we have **Theorem 2** The ELBO of DIRE is \[ \begin{align*} &\mathbb{E}_{p(x)p(t|x)p(y|t,x)}[p_\theta(x|t,y)] \\ &\geq \mathbb{E}_{p(x,t,y)q_\phi(z_4 | x)}[\log p_\theta(x|z_4)] + \mathbb{E}_{q_\phi(z_1,z_2,z_3,z_4,x,t,y)}[\log p_\theta(z_4 | z_1, z_2, z_3) - \log q_\phi(z_4 | x)] \\ &+ \sum_{i=1}^{3} \mathbb{E}_{p(x,t,y)}\mathbb{E}_{q_\phi(z_4 | x)}[-KL(q_\phi(z_i | pa_\phi(z_i)) || q_\phi(z_i))] - \sum_{j} KL(q_\phi(z_{ij}) || p_\theta(z_{ij} | pa(z_{ij}))) \\ &- KL(q_\phi(z_i) || \prod_{j} q_\phi(z_{ij})), \end{align*} \] (13) where \( pa(z) \) denote the parent nodes of \( z \) in Fig 1. Given auxiliary information \( T, Y \), the learned latent factors are identifiable. 
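A simplified PyTorch-style skeleton of this architecture is given below. It keeps only the structural idea, namely an encoder split into the three latent blocks, a decoder for $X$, and the two predictive heads discussed below (on the balanced prognostic score $(Z_1, Z_2)$ and the balancing score $(Z_2, Z_3)$), and omits $Z_4$, the conditional priors of Eqs. 9–11, and the ELBO decomposition; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DIRESketch(nn.Module):
    """Simplified skeleton, not the full DIRE model."""
    def __init__(self, x_dim, d1=8, d2=8, d3=8, hidden=64):
        super().__init__()
        d = d1 + d2 + d3
        self.dims = [d1, d2, d3]
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * d))      # means and log-variances
        self.dec = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))       # reconstructs x
        self.outcome_head = nn.Linear(d1 + d2 + 1, 1)            # y from (z1, z2, t)
        self.treatment_head = nn.Linear(d2 + d3, 1)              # t logit from (z2, z3)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
        z1, z2, z3 = z.split(self.dims, dim=-1)
        return z1, z2, z3, mu, logvar

    def forward(self, x, t):                                     # t: (batch, 1) float tensor
        z1, z2, z3, mu, logvar = self.encode(x)
        x_hat = self.dec(torch.cat([z1, z2, z3], dim=-1))
        y_hat = self.outcome_head(torch.cat([z1, z2, t], dim=-1))
        t_logit = self.treatment_head(torch.cat([z2, z3], dim=-1))
        return x_hat, y_hat, t_logit, mu, logvar
```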
**Proposition 2** Assume the following hold: - \( f \) and \( g \) are injective in Eq. 10. - Let \( \psi_e \) be the characteristic function of \( p_e \). \( \{x \in X | \psi_e(x) = 0\} \) has measure zero. - Suppose \( z_1 \in \mathbb{R}^a, z_2 \in \mathbb{R}^b, \) and \( z_3 \in \mathbb{R}^c, a + b + c = n, \) then \( \lambda(t, y) = \lambda_1(y) \oplus \lambda_2(t, y) \oplus \lambda_3(t), \) where \( \lambda_1(y) \in \mathbb{R}^{2a}, \lambda_2(t, y) \in \mathbb{R}^{2b}, \lambda_3(t) \in \mathbb{R}^{2c} \) are parameters of gaussian distribution. - There exists \( 2n + 1 \) points \( (t_0, y_0) \ldots (t_{2n+1}, y_{2n+1}) \) such that the matrix \( L = [(f_1(y_1) - \lambda_1(y_0)) \oplus (\lambda_2(t_1, y_1) - \lambda_2(t_0, y_0)) \oplus (\lambda_3(t_1) - \lambda_3(t_0)), \ldots, (\lambda_1(y_{2n+1}) - \lambda_1(y_0)) \oplus (\lambda_2(t_{2n+1}, y_{2n+1}) - \lambda_2(t_0, y_0)) \oplus (\lambda_3(t_{2n+1}) - \lambda_3(t_0))] \) is invertible, i.e., \( \lambda = \lambda_1 \oplus \lambda_2 \oplus \lambda_3 \) where \( \lambda_1 \) is independent of \( t \), and \( \lambda_3 \) is independent of \( y \). - The sufficient statistics are differentiable almost everywhere. - Let \( k = f \circ g, \) then \( k(z_1, z_2, z_3) = k_1(z_1) \oplus k_2(z_2) \oplus k_3(z_3) \oplus k_4(z_1, z_2) \oplus k_5(z_1, z_3) \oplus k_6(z_2, z_3) \oplus k_7(z_1, z_2, z_3) \) satisfies \( Range(k_i) \cap Range(k_j) = \emptyset \). then if \( p_\theta(x|t,y) = p'_\theta(x|t,y) \) we have \[ k^{-1}(x) = \text{diag}(a)k'^{-1}(x) + b. \] (14) Hence, agreement on observational distribution, in our case the covariates \( X \), implies that the underlying generating model parameter is uniquely determined. Moreover, as indicated in A, such identification can be done up to translation and scaling. The derivation of the derived ELBO in **Theorem 1** enables us to learn identifiable latent representations for adjustments, confounders, and instruments. We add two estimators on top of the balanced prognostic score and balancing score. Estimating the selected treatment using the balancing score allows us to more accurately identify the root cause of selection bias. Furthermore, estimating the outcome using the balanced prognostic score enables us to obtain more robust outcome estimations across different treatments. The overall loss is derived as: \[ \mathcal{L} = \mathcal{L}_{\text{prognostic score}} + \mathcal{L}_{\text{ELBO}} + \mathcal{L}_{\text{balancing score}}. \] (15) And the loss for the ELBO is: \[ \mathcal{L}_{\text{ELBO}} = \] \begin{align*} &\mathbb{E}_{p(x,t,y)q_\phi(z_4|x)}[\log p_\theta(x|z_4)] - \mathbb{E}_{q_\phi(z_1,z_2,z_3,x,t,y)}[\alpha_4(\log q_\phi(z_4|x) - \log q_\phi(z_4)) \\ &+ \beta_4(\log q_\phi(z_4) - \log q_\phi(\prod_j z_{4j})) + \gamma_4(\log q_\phi(\prod_j z_{4j}) - \log p_\theta(z_4|z_1,z_2,z_3))] \\ &+ \sum_{i=1}^{3} \alpha_i \mathbb{E}_{p(x,t,y)q_\phi(z_i|x)}[-KL(q_\phi(z_i|pa_\phi(z_i))||q_\phi(z_i)) - \beta_i \sum_j KL(q_\phi(z_{ij})||p_\theta(z_{ij}|pa(z_{ij}))) \\ &- \gamma_i KL(q_\phi(z_i)||\prod_j q_\phi(z_{ij}))]. \end{align*} (16) in which we introduced the ELBO decomposition trick (Chen et al., 2018) to learn better disentangled representations. 
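Continuing the sketch above, the overall objective in Eq. 15 can be assembled as follows. For brevity, the ELBO term here is a plain VAE bound with a standard normal prior rather than the decomposed bound of Eq. 16, and the squared-error and cross-entropy choices for the two heads are assumptions rather than choices prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def dire_loss_sketch(model, x, t, y, beta=1.0):
    """Eq. 15 at a glance: L = L_prognostic score + L_ELBO + L_balancing score."""
    x_hat, y_hat, t_logit, mu, logvar = model(x, t)
    loss_prognostic = F.mse_loss(y_hat.squeeze(-1), y)            # outcome predictor on (z1, z2)
    loss_balancing = F.binary_cross_entropy_with_logits(t_logit.squeeze(-1), t.squeeze(-1))
    recon = F.mse_loss(x_hat, x)                                  # reconstruction term
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return loss_prognostic + (recon + beta * kl) + loss_balancing
```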
$L_{prognostic\ score}$ is the loss of the outcome predictor, where we can use the loss function of any downstream treatment effect estimators such as (Shalit et al., 2017; Hassanpour & Greiner, 2019a; Künzel et al., 2019; Yao et al., 2018), and $L_{balancing\ score}$ is the loss of the treatment predictor, where we predict the treatment using the identifiable balancing score. 5 EXPERIMENTS Our experiments aim to answer the following questions: Q1: Can our method effectively handle the limited overlap problem? Q2: Is our method robust when faced with varying degrees of limited overlap? Q3: Can our method successfully address the limited overlap problem within the structured treatment setting? Q4: How does our method perform in scenarios with zero overlap? To evaluate our approach, we conduct experiments on synthetic and semi-synthetic datasets, considering both within-sample and out-sample settings. 5.1 EXPERIMENTAL SETUP Dataset. We conducted experiments on three datasets, and the detailed information can be found in the Appendix. First, IHDP, a de facto semi-synthetic benchmark compiled by Hill (2011) to study the treatment effect of home visit on future cognitive test scores. We follow the same setting as Johansson et al. (2016); Shalit et al. (2017); Louizos et al. (2017), averaging over 1000 replications of simulated outcomes with a 63/27/10 train/validation/test split. Second, we synthesized a more challenging synthetic dataset to assess the performance of our method under different degrees of limited overlap. Third, drawing inspiration from Kaddour et al. (2021), we designed a structured treatment dataset using scaffold split (Ramsundar et al., 2019). This dataset required us to perform zero-shot/zero-overlap treatment effect estimation on out-of-distribution treatments. For further details regarding the synthetic datasets, please refer to the Appendix. Baselines. We choose BLR, BNN (Johansson et al., 2016), BART (Chipman & McCulloch, 2016; Chipman et al., 2010), RF (Breiman, 2001), CF (Wager & Athey, 2018), CEVAE (Louizos et al., 2017), GANITE (Yoon et al., 2018), $\beta$-intact-VAE (Wu & Fukumizu, 2021), DR-CFR (Hassanpour & Greiner, 2019b), SIN (Kaddour et al., 2021) as baselines. In particular, we included $\beta$-intact-VAE as a comparable baseline that primarily addresses limited overlap. SIN was chosen due to its ability to handle structured treatment settings. We also selected DR-CFR, a disentanglement learning method, to compare its performance against our proposed DIRE in the limited overlap setting. 5.2 RESULTS ON IHDP (Q1) We adopt two metrics to evaluate the methods. Individual-based evaluation metric, $PEHE = \sqrt{\sum_{i=1}^{N} ((y_{1i} - y_{0i}) - (\tau_{1i} - \tau_{0i}))^2}$ and population-based metric, $\epsilon_{ATE} = |\sum_{i=1}^{N} (\tau_{1i} - \tau_{0i}) - \sum_{i=1}^{N} (y_{1i} - y_{0i})|$. Results are depicted in Tab. 2, where the best results for each metric is bolded, and the runner-ups are underlined. 1Results are taken directly from Shalit et al. (2017); Louizos et al. (2017); Yoon et al. (2018); Wu & Fukumizu (2021). Table 1: IHDP Results. 
| Method | within-sample | out-sample | |-----------------|---------------|------------| | | PEHE | εATE | PEHE | εATE | | OLS-1 | 5.8 ± .3 | .73 ± .04 | 5.8 ± .3 | .94 ± .06 | | OLS-2 | 2.4 ± .1 | .14 ± .01 | 2.5 ± .1 | .31 ± .02 | | BLR | 5.8 ± .3 | .72 ± .04 | 5.8 ± .3 | .93 ± .05 | | k-NN | 2.1 ± .1 | .14 ± .01 | 4.1 ± .2 | .79 ± .05 | | BART | 2.1 ± .1 | .23 ± .01 | 2.3 ± .1 | .34 ± .02 | | RF | 4.2 ± .2 | .73 ± .05 | 6.6 ± .3 | .96 ± .06 | | CF | 3.8 ± .2 | .18 ± .01 | 3.8 ± .2 | .40 ± .03 | | BNN | 2.2 ± .1 | .37 ± .03 | 2.1 ± .1 | .42 ± .03 | | CFR-WASS | .71 ± .0 | .25 ± .01 | .76 ± .0 | .27 ± .01 | | CEVAE | 2.7 ± .1 | .34 ± .01 | 2.6 ± .1 | .46 ± .02 | | GANITE | 1.9 ± .4 | .43 ± .05 | 2.4 ± .4 | .49 ± .05 | | Beta-Intact-VAE | 0.709 ± .024 | .180 ± .007| 0.946 ± .048 | .211 ± .011| | DIRE | **0.475 ± 0.006** | **0.130 ± 0.003** | **0.520 ± 0.011** | **0.141 ± 0.003** | As shown in Table 2, DIRE consistently outperforms all other baseline methods across all evaluation metrics. Notably, even though Wu & Fukumizu (2021) primarily focuses on the post-treatment setting, DIRE achieves a significant improvement over β-Intact-VAE. Furthermore, since DIRE also generalizes its identification capability to the out-sample setting, we have achieved state-of-the-art (SOTA) results in the out-sample scenario as well. 5.3 Results on Synthetic Dataset (Q2) To assess the effectiveness of our method across different degrees of limited overlap, we conducted experiments using five non-overlapping levels denoted as ω, where a higher value of ω indicates a more severe non-overlapping scenario. For each non-overlapping level, we examined 27 configurations by varying the dimensions of the latent variable, specifically \( \text{dim } v \in \{4, 8, 10\} \). Our data generation process differs from that of Wu et al. (2021) in that we also consider \( Z_3 \) as a source of selection bias. This additional factor makes it more challenging to derive a low-dimensional balanced prognostic score from the covariates. To ensure fair comparison, we conduct hyperparameter search using Li et al. (2020) on a hold-out validation dataset and select the best hyperparameters over 30 runs. The results, depicted in Figure 2, include both in-sample (Figure 2(a)) and out-sample (Figure 2(b)) evaluations. We observed that even in the in-sample scenario, β-Intact-VAE struggles to generate a balanced prognostic score in the presence of instruments, where the overlapping condition is not necessary. The performance of DR-CFR diminishes as the limited overlapping level becomes more severe, as evident from Figure 2 when ω is set to 10 or 15. In contrast, DIRE exhibits robustness across all limited overlapping levels, with its performance remaining unaffected or even improving in more severe cases. This highlights the efficacy of learning a balanced prognostic score and a balancing score simultaneously in DIRE. 5.4 Results on Structured Treatments Dataset (Q3&Q4) The structured treatment setting presents additional challenges due to the involvement of multiple treatments, where even slight variations in the treatment structure result in a different treatment. As ![Figure 2: Synthetic Dataset Result.](image-url) such, we investigate the out-of-distribution treatment setting to see if our learned balanced prognostic score enables us to generalize under the out-of-distribution zero-shot setting. 
Given that $\beta$-intact-VAE (Wu & Fukumizu, 2021) cannot handle the structured treatment problem, we mainly compare with SIN (Kaddour et al., 2021) whose $g(X)$ representation naturally serves as a balanced prognostic score as well. We use the evaluation metric proposed by Kaddour et al. (2021), where $\epsilon_{\text{UPEHE}}(\text{WPEHE}) = \int_X (\hat{\tau}(t', t, x) - \tau(t', t, x))^2 p(t|x)p(t'|x)p(x)dx$. PEHE@K is computed over the top $K$ treatments ranked by propensities with $\binom{K}{2}$ combinations. To ensure fair comparison, we conduct hyperparameter search using Li et al. (2020) on a hold-out validation dataset and select the best hyperparameters over 100 runs. For more detail refer to the appendix. The results are shown in Tab. 3, where the best results for each metric is bolded, and the runner-ups are underlined. Table 2: CATE Estimation Error measured at PEHE@10, averaged over 25 random seeds. | Method | Weighted PEHE | Unweighted PEHE | |----------------------|---------------|-----------------| | | Within-Sample | Out-Sample | Within-Sample | Out-Sample | | ZERO | 24.05 ± 2.20 | 15.47 ± 1.54 | 24.60 ± 0.97 | 16.00 ± 0.69 | | SIN | 23.93 ± 1.33 | 16.00 ± 1.20 | 24.86 ± 0.85 | 16.76 ± 0.70 | | SIN-With-Aux-Info | 23.94 ± 2.19 | 15.42 ± 1.53 | 24.38 ± 0.95 | 15.93 ± 0.69 | | DIRE | **7.87 ± 0.50** | **10.44 ± 0.96** | **8.54 ± 0.33** | **11.89 ± 0.65** | SIN does not effectively utilize the auxiliary information and performs worse than zero. Even when provided with auxiliary information $T$ (a vector of molecular properties used as the treatment), SIN still struggles to learn a stable balanced prognostic score (bPGS), with its performance being similar to zero. In contrast, DIRE successfully identifies the confounding factors even when faced with out-of-distribution treatment $j_k$ in the zero-overlapping scenario, as outlined in Assumption 4.2. This demonstrates that only DIRE effectively learns a balanced prognostic score, while the other methods fall short in this regard. 6 CONCLUSION This paper addresses the challenge of limited overlap in treatment effect estimation by proposing a method that allows for the identification of latent adjustments, confounders, and instruments. By leveraging these latent factors, we can relax the requirement of overlapping conditions and extend our estimation to non-overlapping regions. Moreover, our method enables generalization to out-of-distribution treatments with zero overlap. The experimental results demonstrate the superiority of our proposed method across various benchmarks, highlighting its effectiveness and versatility. REFERENCES Victoria Allan, Sreeram V Ramagopalan, Jack Mardekian, Aaron Jenkins, Xiaoyan Li, Xianying Pan, and Xuemei Luo. Propensity score matching and inverse probability of treatment weighting to address confounding by indication in comparative effectiveness research of oral anticoagulants. *Journal of comparative effectiveness research*, 9(9):603–614, 2020. Susan Athey and Guido W Imbens. The state of applied econometrics: Causality and policy evaluation. *Journal of Economic perspectives*, 31(2):3–32, 2017. Susan Athey, Guido W Imbens, and Stefan Wager. Approximate residual balancing: debiased inference of average treatment effects in high dimensions. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 80(4):597–623, 2018. Peter C Austin. An introduction to propensity score methods for reducing the effects of confounding in observational studies. 
*Multivariate behavioral research*, 46(3):399–424, 2011. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. *IEEE transactions on pattern analysis and machine intelligence*, 35(8):1798–1828, 2013. Leo Breiman. Random forests. *Machine learning*, 45:5–32, 2001. Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. *Advances in neural information processing systems*, 31, 2018. Hugh Chipman and Robert McCulloch. Bayestree: Bayesian additive regression trees. *R package version 0.3-1.4*, 7, 2016. Hugh A Chipman, Edward I George, and Robert E McCulloch. Bart: Bayesian additive regression trees. 2010. Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Dealing with limited overlap in estimation of average treatment effects. *Biometrika*, 96(1):187–199, 2009. Neil M Davies, Matt Dickson, George Davey Smith, Gerard J Van Den Berg, and Frank Windmeijer. The causal effects of education on health outcomes in the uk biobank. *Nature human behaviour*, 2(2):117–125, 2018. Angus Deaton and Nancy Cartwright. Understanding and misunderstanding randomized controlled trials. *Social science & medicine*, 210:2–21, 2018. Max H Farrell. Robust inference on average treatment effects with possibly more covariates than observations. *Journal of Econometrics*, 189(1):1–23, 2015. David A Freedman and Richard A Berk. Weighting regressions by propensity scores. *Evaluation review*, 32(4):392–409, 2008. Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, Bernhard Schölkopf, et al. Covariate shift by kernel mean matching. *Dataset shift in machine learning*, 3(4):5, 2009. Mary Grzybowski, Elizabeth A Clements, Lori Parsons, Robert Welch, Anne T Tintinalli, Michael A Ross, and Robert J Zalenski. Mortality benefit of immediate revascularization of acute st-segment elevation myocardial infarction in patients with contraindications to thrombolytic therapy: a propensity analysis. *JAMA*, 290(14):1891–1898, 2003. Ben B Hansen. The prognostic analogue of the propensity score. *Biometrika*, 95(2):481–488, 2008. Negar Hassanpour and Russell Greiner. Counterfactual regression with importance sampling weights. In *IJCAI*, pp. 5880–5887, 2019a. Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In *International Conference on Learning Representations*, 2019b. Jennifer L Hill. Bayesian nonparametric modeling for causal inference. *Journal of Computational and Graphical Statistics*, 20(1):217–240, 2011.
QIrYb3Vlze
What does it mean by “as it requires two times of the score model evaluations”? Can it be understood without knowing the paper by Kwon et al. [1]? I also read Section 2.2, but it is still unclear to me. Why does the method in [1] require sampling twice, and why does this not happen in the method proposed in this paper?
ISOMETRIC REPRESENTATION LEARNING FOR DISENTANGLING LATENT SPACE OF DIFFUSION MODELS Anonymous authors Paper under double-blind review ABSTRACT Diffusion models have made remarkable progress in capturing and reproducing real-world data. Despite their success and further potential, their latent space, the core of diffusion models, mostly still remains unexplored. In fact, the latent spaces of existing diffusion models still do not align close with the human perception, entangling multiple concepts in a distorted space. In this paper, we present Isometric Diffusion, equipping a diffusion model with isometric representation learning to better reflect human intuition and understanding of visual data. Specifically, we propose a novel loss to promote isometry of the mapping between the latent space and the data manifold, enabling a semantically and geometrically better latent space. This approach allows diffusion models to learn a more disentangled latent space, enabling smoother interpolation and precise control over attributes directly in the latent space. Our extensive experiments demonstrate the effectiveness of Isometric Diffusion, suggesting that our method helps to align latent space with perceptual semantics. This work paves the way for fine-grained data generation and manipulation. 1 INTRODUCTION Generative models produce images, texts, or other types of data by learning the distribution of the observed samples in its latent space and how to map it to the actual data space. In general, we desire the latent space to reflect the human perception. That is, we wish we could find a linear subspace of the latent space that is aligned with an attribute that human perceives important to distinguish the observed samples. Equivalently, we would locate samples that look semantically similar to human nearby, and vice versa, in the latent space. Such a latent space easily disentangles the key attributes from the human’s perspective, allowing us to control the generated samples as desired. Recently, diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020; Song et al., 2020b) have achieved unprecedented success across multiple fields, including image generation (Dhariwal & Nichol, 2021; Nichol et al., 2021; Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022), image editing (Kawar et al., 2023; Ruiz et al., 2023; Hertz et al., 2022), and video generation (Ho et al., 2022; Blattmann et al., 2023). However, compared to other generative models like generative adversarial network (GAN) (Goodfellow et al., 2014) or variational autoencoder (VAE) (Kingma & Welling, 2013), there are few studies exploring the latent space of diffusion models. Due to their iterative sampling process that progressively removes noise from random initial vectors, it is complicated to analyze or manipulate the latent vectors. A naive latent walking by linear interpolation between two latent vectors, for example, turns out to produce unwanted intermediate images, as illustrated in Fig. 1 (top). A couple of recent works report important observations about the latent space \( \mathcal{X} \) learned by diffusion models. First of all, Kwon et al. (2023) discovers that a diffusion model already has a semantic latent space \( \mathcal{H} \) in the intermediate feature space of its score model. They suggest that \( \mathcal{H} \) is semantically well-defined and locally Euclidean, and thus linear perturbations in \( \mathcal{H} \) would lead to approximately linear changes in semantic attributes. 
However, manipulating attributes indirectly through \( \mathcal{H} \) is not fully desirable. One reason is additional computations accompanied with this indirect manipulation, as it requires two times of entire reverse diffusion process. According to Kwon et al. (2023), asymmetric reverse process is required for image change, and this requires two independent inferences of the score model with different inputs: \( \epsilon_t(x_t) \) and \( \hat{\epsilon}_t(x_t) = \epsilon_t(x_t; f(x_t, t)) \), where \( f \) is an additional neural network to find the editing direction. Another computational cost comes from training \( f \) to find local editing directions at every point of \( H \) for accounting every time after stepping forward in \( H \). With this indirect approach, a clear relationship between \( X \) and \( H \) has not been established, leaving it as an open question how to directly manipulate a particular attribute from the latent vector \( x \in X \) instead of \( (x, h) \in X \otimes H \). A subsequent work (Park et al., 2023b) suggests that a spherical linear interpolation (Slerp) in \( X \) is close to geodesic in \( H \), which implies it approximates a linear interpolation (Lerp) in \( H \). This discovery indicates that we may be able to manipulate semantics of a generated image directly in \( X \), with some care on the spherical geometry of the latent space. To illustrate, we explore \( X \) by sequentially generating images on a spherically interpolated trajectory between two latent vectors, \( x, x' \in X \). Fig. 1(mid) illustrates that it is not a geodesic on the data manifold; on the trajectory between two men, it unnecessarily goes through an woman. This can be interpreted that there exists some distortion in the latent space of diffusion models, implying that they fail to adequately preserve the geometric structure of the data manifold. In other words, the latent space and perceptual semantics do not align well. Such a misalignment often leads to entanglement of multiple semantic concepts, making it tricky to conduct fine-grained manipulations. Motivated from the desire to directly align the latent space with the data manifold, we present Iso-metric Diffusion, a diffusion model equipped with isometric representation learning, where isometry is a distance preserving map between metric spaces, which also preserves geodesics. More specifically, we introduce a novel loss to encourage isometry between \( X \) and the data manifold. With this additional supervision, the learned \( X \) allows semantically disentangled geodesic traversal and smoother interpolation with less abrupt changes when navigating \( X \), as illustrated in Fig. 1(bottom). We demonstrate the effectiveness of our proposed method through extensive experiments, both quantitatively and qualitatively with several widely-used metrics, on multiple datasets. 2 Latent Space of Diffusion Models In this section, we briefly review the latent spaces of diffusion models and illustrate the objective to achieve a better disentangled latent space. 2.1 Latent Space \( X \) of Diffusion Models Given an observed image space, denoted by \( X_0 \), the forward process of diffusion models repeatedly perturbs an image \( x_0 \in X_0 \) by \( x_t = \sqrt{\alpha_t} x_0 + \sqrt{1 - \alpha_t} \epsilon_0 \), with noise \( \epsilon_0 \sim N(0, I) \) for \( t = 1, ..., T \) and \( \alpha_t = \prod_{i=1}^{t} \alpha_i \). 
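As a minimal PyTorch sketch of the forward perturbation above and of the spherical-shell behaviour of fully noised latents discussed below (function and variable names are ours):

```python
import math
import torch

def forward_perturb(x0: torch.Tensor, alpha_bar_t: float) -> torch.Tensor:
    """Sample x_t from q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = torch.randn_like(x0)
    return math.sqrt(alpha_bar_t) * x0 + math.sqrt(1.0 - alpha_bar_t) * eps

# Fully noised latents x_T ~ N(0, I_n) concentrate on a thin shell of radius sqrt(n):
n = 3 * 256 * 256                      # latent dimension of a 256 x 256 RGB image
x_T = torch.randn(8, n)
print(x_T.norm(dim=1) / math.sqrt(n))  # all ratios are close to 1.0
```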
These perturbed images \( x_t \) construct a chain of latent spaces for \( t = 1, ..., T \), and the image space at each time step \( t \) is denoted by \( X_t \). For simplicity, we denote \( X_T = X \). To recover the original image \( x_0 \) from \( x_T \), diffusion models train a score model \( s_\theta \) by minimizing the following denoising score matching loss (Vincent [2011], Song et al. [2020b]): $$L_{\text{dsm}} = \mathbb{E}_t \left\{ \lambda(t) \mathbb{E}_{x_0} \mathbb{E}_{x_t | x_0} \left[ \| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t | x_0) \|_2^2 \right] \right\},$$ where $\theta$ is a set of learnable parameters of the score model and $\lambda(t)$ is a positive weighting function. With the trained $s_\theta$, we can generate an image $x_0$ from a sample $x_T \sim \mathcal{N}(0, I)$ through the reverse diffusion process. Here, the distribution of the norm of completely noised images $\|x_T\|_2$ follows a $\chi$-distribution, and they are distributed on the shell of a sphere, not uniformly within the sphere (see Sec. 3.1 for more details). For this reason, linearly interpolating two images within $\mathcal{X}$, as shown in Fig. 1 (top), results in path far from geodesic on the data manifold, while spherical linear interpolation follows a shorter path. As seen in Fig. 1 (mid), however, the spherical linear interpolation is still semantically not disentangled, indicating that $\mathcal{X}_T$ is not isometric to the data manifold. ### 2.2 Intermediate Latent Space $\mathcal{H}$ as a Semantic Space Kwon et al. [2023] claims that the learned intermediate feature space $\mathcal{H}$ of the score model $s_\theta$ sufficiently preserves the semantics of the observed images. They report that a linear scaling by $\Delta h$ on $\mathcal{H}$ controls the magnitude of semantic changes, and applying the same $\Delta h$ on a different sample results in a similar magnitude of effect. This implies that, by minimizing the loss in Eq. (1), $\mathcal{H}$ reasonably learns the low-dimensional data manifold with its geometry preserved and $\mathcal{H}$ is close to isometric to the data manifold. Therefore, we claim that as the mapping from $\mathcal{X}$ to $\mathcal{H}$ becomes closer to isometric, the mapping of the data manifold from $\mathcal{X}$ can also become more isometric. The advantages by achieving this objective is covered in Appendix E. Motivated from these observations, we aim to train the encoder of the score model in a way to ensure isometry. By aligning a spherical trajectory in $\mathcal{X}$ with a geodesic in $\mathcal{H}$, our encoder paves the way for a more coherent utilization of $\mathcal{X}$ as a semantic space. ### 3 Isometric Representation Learning for Diffusion Models The goal of our work is to learn a latent space $\mathcal{X}$ which reflects semantics perceived by human. As this is not straightforward to achieve directly, we rely on a recent observation by Kwon et al. [2023] that the bottleneck layers $\mathcal{H}$ in diffusion models reasonably reflect semantics (Sec. 2.2). Thus, instead of building a semantic latent space from scratch, our approach aims to learn a geodesic-preserving mapping between $\mathcal{X}$ and $\mathcal{H}$. For this, we claim that a scaled isometric mapping (Lee et al. [2021]) guides the encoder of the diffusion model to preserve geodesics between the two spaces (Sec. 3.2), between an approximated spherical latent space $\mathcal{X}'$ (Sec. 3.1) and the semantic latent space $\mathcal{H}$. Fig. 
3 illustrates the overall flow of our approach. With stereographic coordinates for $\mathcal{X}$ and Cartesian coordinates for $\mathcal{H}$ as local coordinates, respectively, we equip with an appropriate Riemannian metric to the local coordinate spaces. Then, we guide the encoder of the score model to map from $\mathcal{X}$ to $\mathcal{H}$ so as to preserve geodesic between them. Lastly, we discuss computational considerations (Sec. 3.3). **Illustration.** Before introducing our method, we first illustrate the purpose of isometric representation learning with a toy autoencoder model, learning an encoding map from $S^2$ to $\mathbb{R}^2$. The autoencoder is trained with the reconstruction loss, regularized with the isometric loss in Eq. (6). Figure 3: Illustration of $\mathcal{X}$, $\mathcal{H}$, and local coordinates of those two manifolds. Our isometric loss regularizes the encoder of the score model to map a spherical trajectory in $\mathcal{X}$ to a linear trajectory in $\mathcal{H}$, preserving a geodesic in $\mathcal{X}$ to a geodesic in $\mathcal{H}$. $\Pi_{n-1}$, $\Phi$ are charts mapping from Riemmanian manifolds to local coordinate spaces. $z$, $z'$ denote the local coordinates of $\mathcal{X}$, $\mathcal{H}$, respectively. Fig. 2 illustrates an autoencoder flattening the given $S^2$ manifold in (a) with three different losses. Only with reconstruction loss in (b), we see that the manifold is significantly distorted, points far away in the input often are located closely. We observe less distortion with the isometric loss under the assumption of the Euclidean metric in local coordinates of $S^2$ ($G = I$) in (c), but it still does not preserve geodesic. With our full loss in (d), we may see that the geometry of input space is more preserved with $G = G_{\text{stereographic}}$ from Eq. (3). We provide more illustrations in Appendix B. Recall that the sampling process of diffusion models starts from a Gaussian noise, $x_T \sim \mathcal{N}(0, I_n) \in \mathbb{R}^n$, where $T$ is the number of reverse time steps. Then, the radii of Gaussian noise vectors $x_T$ follow $\chi$-distribution: $r = \sqrt{\sum_{i=1}^{n} x_{T,i}^2} \sim \chi(n)$, whose mean and variance are approximately $\sqrt{n}$ and variance of 1, respectively. For a sufficiently large $n$ (e.g., $n = 3 \times 256^2$ to generate an image of size $256 \times 256$), the noise vectors reside within close proximity of a hypersphere with $r = \sqrt{n}$. ### 3.1 Spherical Approximation of the Latent Space From this observation, we approximate the noise vectors $x \in \mathcal{X}$ (we omit subscripts to be uncluttered) reside on the hypersphere manifold $S^{n-1}(r) = \{x \in \mathbb{R}^n : \|x\| = r\}$. To define a Riemannian metric on $S^{n-1}(r)$, we need to choose charts and local coordinates to represent the Riemannian manifolds (Miranda [1995]). We choose the stereographic coordinates (Apostol [1974]) as the local coordinate to represent $\mathcal{X}$ and $\Phi = \text{id}$ following the linearity argument of $\mathcal{H}$ (Kwon et al. [2023]). Stereographic projection $\Pi_{n-1}: S^{n-1}(r) \setminus \{N\} \to \mathbb{R}^{n-1}$ is a bijective transformation from every point except for the north pole ($N$) on the hypersphere to a plane with north pole as the reference point. 
$\Pi_{n-1}$ and its inverse projection $\Pi_{n-1}^{-1}$ are given by $$\Pi_{n-1}(x) = \frac{1}{r - x_n}(x_1, x_2, \cdots, x_{n-1}), \quad \Pi_{n-1}^{-1}(z) = \frac{r}{|z|^2 + 1}(2z_1, 2z_2, \cdots, 2z_{n-1}, |z|^2 - 1).$$ In stereographic coordinates, the Riemannian metric of the $S^{n-1}(r)$ (do Carmo [1992]) is given by $$G_{\text{stereographic}}(z) = \frac{4r^4}{(|z|^2 + r^2)^2}I_{n-1}, \quad \forall z \in \mathbb{R}^{n-1}. \quad (3)$$ Recall that a diffusion model consists of a chain of latent spaces. Hence, it is needed to verify at every time step the validity of spherical approximation. From $x_t = \sqrt{\alpha_t}x_0 + \sqrt{1 - \alpha_t}\epsilon_0$, the variance of perturbation kernels is $\text{Var}[p(x_t|x_0)] = 1 - \alpha_t = 1 - e^{-f - \beta(t)dt}$ (Song et al. [2020b]). Figure 4: Scheduling of $\alpha$ We use a linear noise schedule \( \beta_t = \beta_0 (1 - \frac{t}{T}) + \beta_T \frac{t}{T} \) with \( \beta_t = 1 - \alpha_t \), where the variance schedule is illustrated in Fig. 4. We claim that for a sufficiently large \( t \), \( \sqrt{1 - \alpha_t} \approx 1 \) and thus the latent space can be approximated to a sphere. That is, we approximate \( X_t \approx S^{n-1}(r) \) with \( r = \sqrt{1 - \alpha_t} \cdot E[\chi(n)] \approx \sqrt{n(1 - \alpha_t)} \) for \( t > pT \), where we set \( p \in [0, 1] \) as a hyperparameter. ### 3.2 ISOMETRIC MAPPINGS **Definition.** An isometric mapping (or isometry) is a transformation between two metric spaces that globally preserves distances and angles. A mapping between two Riemannian manifolds \( e_\theta : M_1 \rightarrow M_2 \) (\( f \) in local coordinates; \( f = \Phi \circ e'_\theta \circ \Pi_{n-1}^{-1} \)) is a scaled isometry (Lee et al., 2021) if and only if \[ G(z) = c J_f(z)^T H(f(z)) J_f(z), \quad \forall z \in \mathbb{R}^{n-1}, \] where \( c \in \mathbb{R} \) is a constant, \( J_f(z) = \frac{\partial f}{\partial z}(z) \in \mathbb{R}^{(n-1) \times m} \) is the Jacobian of \( f \), \( G(z) \in \mathbb{R}^{(n-1) \times (n-1)} \) and \( H(z') \in \mathbb{R}^{m \times m} \) are the Riemannian metrics defined at the local coordinates \( z, z' \) of \( M_1 = \mathbb{R}^{n-1} \) and \( M_2 = \mathbb{R}^m \), respectively. Equivalently, \( f \) is a scaled isometry if and only if \( J_f^T H J_f G^{-1} = c I \) where \( c \in \mathbb{R} \) is a global constant. If \( c = 1 \) globally, \( f \) is a strict isometry. Scaled isometry allows the constant \( c \) to vary, preserving only the scaled distances and angles. This relaxation makes it easier to optimize a function to preserve geodesic with less restrictions. In our problem formulation, \( M_1 = S^{n-1}(X) \), \( M_2 = \mathbb{R}^m(H) \), and \( H(z') = I_m \), as introduced in Sec. 3.1. Although evaluation of \( J_f^T H J_f G^{-1} \) is coordinate-invariant, our choice of stereographic coordinates is computationally advantageous, as its Riemannian metric in Eq. (3) is proportional to the identity matrix. **Geodesic-preserving Property.** In order for an encoding mapping from \( X \) to \( H \) to respect the semantic structure embedded in the image space, we would like to make this mapping geodesic-preserving. 
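The following PyTorch sketch implements the projection, its inverse, and the conformal factor of Eq. (3) exactly as displayed; the names are ours and the snippet only illustrates the coordinate choice, not the authors' implementation.

```python
import torch

def stereographic_project(x: torch.Tensor, r: float) -> torch.Tensor:
    """Pi_{n-1}: maps x on S^{n-1}(r), excluding the north pole, to R^{n-1}."""
    return x[..., :-1] / (r - x[..., -1:])

def inverse_stereographic(z: torch.Tensor, r: float) -> torch.Tensor:
    """Pi_{n-1}^{-1}: maps z in R^{n-1} back to S^{n-1}(r)."""
    z2 = (z ** 2).sum(dim=-1, keepdim=True)
    return torch.cat([2.0 * z, z2 - 1.0], dim=-1) * r / (z2 + 1.0)

def stereographic_metric_scale(z: torch.Tensor, r: float) -> torch.Tensor:
    """Conformal factor of Eq. (3): G(z) = 4 r^4 / (|z|^2 + r^2)^2 * I_{n-1}."""
    z2 = (z ** 2).sum(dim=-1)
    return 4.0 * r ** 4 / (z2 + r ** 2) ** 2

# Round trip on random points of S^{n-1}(r):
n, r = 16, 4.0
x = torch.randn(4, n)
x = r * x / x.norm(dim=-1, keepdim=True)
z = stereographic_project(x, r)
print((inverse_stereographic(z, r) - x).abs().max())  # ~1e-6 (float error)
```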
We claim that the scaled isometry leads to a geodesic-preserving mapping \[ \arg \min_{\gamma(t)} \int_0^1 \sqrt{\dot{\gamma}(t)^T G(\gamma(t)) \dot{\gamma}(t)} dt = \arg \min_{\gamma(t)} \int_0^1 \sqrt{\dot{\gamma}(t)^T J(\gamma(t))^T H(f(\gamma(t))) J(\gamma(t)) \dot{\gamma}(t)} dt, \] for an arbitrary trajectory \( \gamma : [0, 1] \rightarrow \mathbb{R}^n \) in local coordinates of \( M_1 \) with fixed endpoints \( (\gamma(0) = x_0, \gamma(1) = x_1) \), where \( x_0, x_1 \in \mathbb{R}^n \) are constant vectors and \( \dot{\gamma}(t) = \frac{d\gamma}{dt}(t) \). **Isometry Loss.** To sum up, we can encourage the mapping from \( X \) to \( H \) to preserve geodesics by regularizing \( R(z) = J_f(z)^T H(f(z)) J_f(z) G^{-1}(z) = c I \), for some \( c \in \mathbb{R} \). It can be achieved by minimizing the following isometry loss: \[ L_{iso}(e_\theta, t) = \frac{\mathbb{E}_{x_t \sim P(x_t)} [\text{Tr}(R^2(z_t))]}{\mathbb{E}_{x_t \sim P(x_t)} [\text{Tr}(R(z_t))]^2} = \frac{\mathbb{E}_{x_t \sim P(x_t)} \mathbb{E}_{v \sim N(0, I)} [v^T R(z_t)^T R(z_t) v]}{\mathbb{E}_{x_t \sim P(x_t)} \mathbb{E}_{v \sim N(0, I)} [v^T R(z_t) v]^2}, \] where \( P(x_t) \) is the noise probability distribution at timestep \( t \), and \( z_t = \Pi_{n-1}(x_t) \). The second equality holds due to the stochastic trace estimator (Hutchinson, 1989), where \( v \in \mathbb{R}^{n-1} \) is a random vector such that \( \mathbb{E}[vv^T] = I \). As a result, our final loss to train the score model is defined by \[ L = L_{dsm} + \lambda_{iso}(p, t)L_{iso}, \] where \( \lambda_{iso}(p, t) \) is a non-negative weighting function to control the relative importance of isometry regularizer for each \( X_t \) and \( p \in [0, 1] \) is the ratio of steps that we do not apply \( L_{iso} \). We use \( \lambda_{iso}(p, t) = \lambda_{iso} 1_{t' > pT}(t' = t) \) where \( 1(\cdot) \) is the indicator function, and the denoising process starts from \( t = T \). **Applying to Diffusion Models.** The isometric loss is not directly applicable to a diffusion model, since it iteratively generates the samples. To guide a geodesic mapping between \( h_T \in H \) and \( x_0 \) (an actual image), we may regularize each step of the iterative sequence; that is, the encoding map between \( x_i \) and \( h_i \) for \( i = 1, ..., T \). Instead of regularizing all steps, we may selectively apply it. For time steps closer to \( T \), samples are closer to a Gaussian, so our assumption may reasonably hold. For time steps closer to 0, however, samples are not sufficiently perturbed yet and thus they follow some intermediate distribution between the Gaussian and the original data distribution as described in Sec. 3.1. Hence, we may not assume these samples lie on \( S^{n-1} \) manifold. 3.3 Computational Considerations To sidestep the heavy computation of full Jacobian matrices, we use stochastic trace estimator to substitute the trace of Jacobian to Jacobian-vector product (JVP). Exploiting the commutativity of the Riemmanian metric in stereographic coordinates, we utilize \( \mathbb{E}_{v \sim \mathcal{N}(0, I)}[v^\top J^\top J G^{-1} v] = \mathbb{E}_{v \sim \mathcal{N}(0, I)}[v^\top \sqrt{G^{-1}} J^\top J \sqrt{G^{-1}} v] \) to reduce the number of JVP evaluations. We provide more details about the computation of stochastic trace estimator in Appendix A.2. 
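To make Eq. (6) and the JVP-based trace estimation of Sec. 3.3 concrete, below is a minimal PyTorch sketch. It assumes $H = I$ and a conformal metric $G = g(z)\,I$ (so that $R(z) = J_f^\top J_f / g(z)$), an encoder $f$ that acts sample-wise on a batch, and a small number of Hutchinson probes; all names are ours and this is an illustration rather than the authors' implementation.

```python
import torch
from torch.autograd.functional import jvp, vjp

def isometry_loss(f, z, g_scale, num_probes=1):
    """Hutchinson estimate of Eq. (6): E[Tr(R^2)] / E[Tr(R)]^2 with R = J^T J / g.

    f        : encoder mapping local coordinates z (B, n-1) to features h (B, m)
    z        : batch of local coordinates, shape (B, n-1)
    g_scale  : conformal factors g(z) of the metric, shape (B,)
    """
    tr_R = torch.zeros(z.shape[0], device=z.device)
    tr_R2 = torch.zeros(z.shape[0], device=z.device)
    for _ in range(num_probes):
        v = torch.randn_like(z)                      # probe with E[v v^T] = I
        _, Jv = jvp(f, z, v, create_graph=True)      # J_f(z) v
        _, JtJv = vjp(f, z, Jv, create_graph=True)   # J_f(z)^T (J_f(z) v)
        tr_R = tr_R + (Jv ** 2).flatten(1).sum(1) / g_scale           # v^T R v
        tr_R2 = tr_R2 + (JtJv ** 2).flatten(1).sum(1) / g_scale ** 2  # v^T R^T R v
    tr_R, tr_R2 = tr_R / num_probes, tr_R2 / num_probes
    return tr_R2.mean() / tr_R.mean() ** 2
```

Note that the normalization by $\mathbb{E}[\operatorname{Tr}(R)]^2$ is what relaxes strict isometry to scaled isometry: uniformly rescaling $f$ rescales $R$ but leaves this ratio unchanged.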
4 Experiments We conduct extensive experiments to verify the effectiveness of our method to diffusion models and corroborate that latent space of diffusion models can be disentangled with isometric loss \( L_{iso} \). 4.1 Experimental Settings Dataset. We evaluate our approach on CIFAR-10, CelebA-HQ (Huang et al., 2018), LSUN-Church (Wang et al., 2017), and LSUN-Bedrooms (Wang et al., 2017). The training partition of each dataset consists of 50,000, 14,342, 126,227, and 3,033,042 samples, respectively. We resize each image to \( 256 \times 256 \) except for CIFAR-10 and horizontally flip it with probability 0.5. Evaluation Metrics. Fréchet inception distance (FID) (Heusel et al., 2017) is a widely-used metric to assess the quality of images created by a generative model by comparing the distribution of generated images with that of ground truth images. Perceptual Path Length (PPL) (Karras et al., 2019) evaluates how well the generator interpolates between points in the latent space, defined as \( \text{PPL} = \mathbb{E}\left[\frac{1}{\tau^2} d(x_t, x_{t+\tau})\right] \), where \( d(\cdot, \cdot) \) is a distance function. We use LPIPS (Zhang et al., 2018) distance using AlexNet (Krizhevsky et al., 2012) for \( d \). A lower PPL indicates that the latent space is better disentangled, since when two or more axes are entangled and geodesic interpolation in \( X \) induces a sub-optimal trajectory in the semantic space, the LPIPS distance gets larger and thereby so does the PPL. For experimentation, we perform 20 and 100 steps of DDIM sampling for FID and PPL, computed with 10,000 and 50,000 images, respectively. Linear separability (LS) (Karras et al., 2019) measures the degree of disentanglement of a latent space, by measuring how much the latent space is separable by a hyperplane. Mean condition number (MCN) and variance of Riemannian metric (VoR) measure how much a mapping is close to a scaled-isometry, proposed by Lee et al. (2021). We provide further details on these metrics in Appendix D. We additionally design a new metric called mean Relative Trajectory Length (mRTL), measuring the extent to which a trajectory in \( X \) is mapped to geodesic in \( H \). Specifically, mRTL is defined as the mean ratio between the L2 distance \( d_2(t) \) between \( h, h' \in H \) features corresponding to two latents \( x, x' \in X \) and another distance measured on the manifold \( d_M(t) \), following along a path on \( \{H_t\} \). That is, \( \text{RTL}(t) = \mathbb{E}_{x, x' \in X}[d_M(t)/d_2(t)] \) and \( \text{mRTL} = \mathbb{E}_t[\text{RTL}(t)] \), where \( t \) denotes the timesteps of the sampling schedule. Intuitively, it represents the degree of isometry of the encoder \( f \). Implementation Details. Our network architecture follows the backbone of DDPM (Ho et al., 2020), which uses a U-Net (Ronneberger et al., 2015) internally. We take a DDPM (Ho et al., 2020) pre-trained on CelebA (Liu et al., 2015) as a starting point, and further train it with each competing method until it achieves the lowest FID. If not specified, we train with batch size 32, learning rate \( 10^{-4} \), \( p = 0.5 \), and \( \lambda_{iso} = 10^{-4} \) for 10 epochs by default. We use Adam optimizer and exponential moving average (Brown, 1956) on model parameters with a decay factor of 0.9999. We set the number of inference steps to 100. We use 4 NVIDIA A100 GPUs with 40GB memory. 4.2 Quantitative Comparison Overall Comparison. In Tab. 
1, 2, we quantitatively compares the performance of our method and DDPM (Base) in various metrics. The results indicate that the diffusion models trained with our isometric loss regularizer exhibit substantial drop (improvement) in PPL implying smoother transitions during latent traversals. Decrease of mRTL, MCN, and VoR signified the encoder of score model became successfully closer to scaled-isometry. For CelebA-HQ, LS and LS measured by SVM with radial basis function kernel significantly decreased, indicating the disentanglement of | Dataset | FID-10k↓ | PPL-50k↓ | mRTL↓ | MCN↓ | VoR↓ | |-----------------|----------|----------|-------|------|------| | CIFAR-10 | 10.27 | 12.50 | 105 | 76 | 2.03 | 1.92 | 155 | 107 | 0.50 | 0.57 | | CelebA-HQ | 15.89 | 16.18 | 648 | 570 | 2.67 | 2.50 | 497 | 180 | 1.42 | 0.85 | | LSUN-Church | 10.56 | 13.01 | 2028 | 1587 | 3.71 | 3.21 | 375 | 217 | 1.92 | 1.37 | | LSUN-Bedrooms | 9.49 | 11.95 | 4515 | 3809 | 3.38 | 3.21 | 320 | 186 | 1.69 | 1.12 | Table 1: **Quantitative comparison.** Diffusion models trained with our isometric loss achieve consistent improvement over the baseline on multiple datasets, with slight sacrifice in FID scores. | Dataset | LS ↓ | LS (radial) ↓ | |-----------------|------|---------------| | Base | Ours | Base | Ours | | CelebA-HQ | 4.39 | 2.65 | 12.3 | 6.8 | Table 2: **Quantitative comparison of linear separability (LS).** LS measures the disentanglement of latent space. This further implies better alignment between the latent space and semantic space, disentangling semantic components in the latent space, as desired. We notice a trade-off between FID and other metrics. Using our isometry loss, PPL and mRTL significantly drop, while FID sometimes marginally increases. In spite of slightly increased FID, however, the quality of the generated images is not significantly damaged, e.g., as seen in examples in Fig. V. With the improved PPL and mRTL, however, latent traversal gets smoother without abrupt changes, easing controlled image manipulation (see Sec. 4.3 for more details). **Mean Relative Trajectory Length.** Fig. 5 shows the measured Relative Trajectory Length (RTL) scores across the reverse timesteps in DDIM ($T = 20$). As the guidance of isometric loss gets larger with a larger $\lambda_{iso}$, the RTL tends to decrease, indicating the geodesic in $X$ (slerp) maps to geodesic in $\{\mathcal{H}_t\}$. We notice a significant drop when $t \leq 10$ especially with a larger $\lambda_{iso}$, where the isometric loss is applied. This indeed shows the isometric loss is accurately guiding the encoder of the score model to learn an isometric representation. ### 4.3 Analysis on the Disentanglement of Latent Space $X$ **Interpolation.** We first conduct traversals on the latent space $X$ between two points $x, x' \in X$, illustrating the generated images from interpolated points between them in Fig. 6. We observe that with our isometric loss the latent space is better disentangled, resulting in smoother transitions without abrupt changes in gender. More examples are provided in Fig. VII-VIII in Appendix H. **Linearity.** We also claim that the latent space $X$ learned with our isometric loss has a property of linearity. Specifically, we compare the generated images with ours to baseline. Both cases are naively moved along the slerp in their latent spaces. We illustrate this in Fig. 
7 by demonstrating that a spherical perturbation on $X$ with various intensity of $\Delta x$ adds or removes specific attributes from the generated images accordingly. We find the editable direction by employing Local Basis (Jang et al., 2022), an unsupervised method for identifying semantic-factorizing directions in the latent space based on its local geometry, and perturb the latents through this direction both for baseline and our model. This method discovers the principal variations of the latent space in the neighborhood of the base latent code. As seen in Fig. 7, the baseline often changes multiple factors (age, gender) abruptly and inconsistently with $\gamma$ (e.g., when $\gamma = -1$ on the right example, it suddenly shows a male-like output), while ours show smoother changes. With previous diffusion-based image editing methods, one needed to take into account the geometry of $\mathcal{H}$ for every step in the editing trajectory (Park et al., 2023b). This requires computation of the Jacobian and its eigenvectors at every step forward in the trajectory via parallel transport along $\mathcal{H}$. This is usually approximated via a projection, referred as geodesic shooting. Using our isometric loss, on the other hand, the editing trajectory becomes closer to the trivial geodesic of the latent Figure 6: Examples of latent traversal between two images $x$ and $x'$ with DDPM \cite{Ho et al., 2020}, trained on $256 \times 256$ CelebA-HQ. We observe unnecessary changes of female $\rightarrow$ male in the baseline, while smoother transitions in ours. For quantitative support, we plot LPIPS distance between each adjacent frames (Blue: Baseline, Orange: Ours). Figure 7: **Linearity.** Images generated from a latent vector $x$ (corresponds to the boxed columns) and from slightly perturbed ones, $x + \gamma \Delta x$ with $\gamma \in \{-2, -1, 0, 1, 2\}$, where $\Delta x$ corresponds to the age axis. space; slerp in $\mathcal{X}$. Thus, we can directly move along the slerp in $\mathcal{X}$ without requiring any additional computations or approximations to find the editing direction of image. ### 4.4 Ablation Study Tab. [3] shows the ablation study on the choice of optimal $p$ and $G$. With $p = 0.5$ and $G = G_{\text{stereographic}}$, we observe the best performance in FID and PPL. FID increases with $p < 0.5$, while PPL improvement gets marginal when $p > 0.5$. Also, when calculating the isometric loss, using an appropriate Riemannian metric $G$ of the latent space turns out to be important. That is, the model with $G = G_{\text{stereographic}}$ achieves competitive FID and PPL scores at the same time, while either of them gets significantly worse with $G = I$. This result supports our spherical assumption on the latent space $\mathcal{X}$ of diffusion models and modeling it as a Riemannian manifold $S^{n-1}$ is indeed reasonable. | $p$ | $G$ | $\lambda_{\text{iso}}$ | FID-10k ↓ | PPL-50k ↓ | |-----|-----|----------------|-----------|-----------| | 1 | - | - | 15.89 | 653 | | 0 | I | $10^{-4}$ | 24.07 | 447 | | 0.5 | I | $10^{-3}$ | 30.28 | 441 | | 0.5 | I | $10^{-4}$ | 16.60 | 619 | | 0.5 | $G_{\text{stereographic}}$ | $10^{-4}$ | 16.18 | 570 | Table 3: **Ablation study** on $p$ (the ratio of steps to skip isometric loss) and $G$ (the choice of Riemannian metric). This experiment has been conducted on CelebA-HQ $256 \times 256$. ## 5 RELATED WORKS ### Diffusion models. 
Recently, diffusion models ([Sohl-Dickstein et al., 2015](#), [Song & Ermon, 2019](#), [Song et al., 2020b](#)) have achieved a great success in eclectic fields, containing image generation ([Dhariwal & Nichol, 2021](#), [Baranchuk et al., 2021](#), [Choi et al., 2021b](#), [Sehwag et al., 2022](#), [Meng et al., 2023](#)), image synthesis ([Meng et al., 2021](#), [Tumanyan et al., 2023](#), [Liu et al., 2023](#)), video generation ([Ho et al., 2022](#), [Blattmann et al., 2023](#)) and sound generation ([Yang et al., 2023](#)). From a pure Gaussian noise, DDPM ([Ho et al., 2020](#)) samples the image by predicting the next distribution using Markov chain property. With non-Markovian process, DDIM ([Song et al., 2020a](#)) accelerates the denoising process of DDPM by skipping sampling steps. ### Latent Space of Generative Models. On traditional Generative Adversarial Networks (GANs) ([Goodfellow et al., 2014](#), [Radford et al., 2015](#), [Zhu et al., 2017](#), [Choi et al., 2018](#), [Ramesh et al., 2018](#), [Härkönen et al., 2020](#), [Abdal et al., 2021](#)) models, StyleGAN ([Karras et al., 2019](#)) is a pioneering work on latent space analysis and improvement. In StyleGANv2 ([Karras et al., 2020](#)), a path length regularizer guides the generator to learn an isometric mapping from the latent space to the image space. Recently, additional studies on GANs ([Shen et al., 2020a,b](#), [Shen & Zhou, 2021](#)) and VAEs ([Hadjeres et al., 2017](#), [Zheng & Sun, 2019](#), [Zhou & Wei, 2020](#)) have examined the latent spaces of generative models. [Kwon et al., 2023](#) found that the internal feature space of U-Net in diffusion models, $\mathcal{H}$, plays the same role as a semantic latent space. [Preechakul et al., 2022](#) discovered that using a semantic encoder enables the access to the semantic space of diffusion models. However, this method utilizes conditional diffusion model, while our work proposes a method that can directly utilize the latent space without any condition. ### Isometric Latent Space for Generative Models. There exist some previous works on utilizing Riemannian geometry to understand the latent spaces. ([Arvanitidis et al., 2021](#)) claimed understanding Riemannian geometry of latent space can improve analysis of representations as well as generative modeling. ([Chen et al., 2020](#)) proposed that interpreting the latent space as Riemannian manifold and regularizing the Riemannian metric to be a scaled identity help VAEs learn a good latent representation. ([Lee et al., 2021](#)) proposed an isometric regularization method for geometry-preserving latent space coordinates in scale-free and coordinate invariant form. However, due to the iterative property of diffusion models, unlike VAEs and GANs, it is demanding to apply isometric representation learning on diffusion models. Thus, to the best of our knowledge, no previous works have been done on applying an isometric mapping to the semantic space of diffusion models. ## 6 SUMMARY AND LIMITATIONS In this paper, we have addressed a critical issue in the field of generative models, specifically unconditional diffusion models. In spite of their advances in generating photorealistic samples, they have lagged behind in terms of understanding and controlling their latent spaces. The proposed approach, **Isometric Diffusion**, leverages isometric representation learning to bridge the gap between the latent space $\mathcal{X}$ and the data manifold. 
With a mapping from latent space to data manifold being close to isometry learned by our approach, we demonstrate that a more intuitive and disentangled latent space for diffusion models can be achieved both quantitatively and qualitatively. ### Limitations. Our proposed method is applicable primarily in noise spaces close to a Gaussian distribution, limiting its applicability. Overcoming this limitation would be an interesting direction for future work. ETHICS STATEMENT The proposed approach in this paper aims to ease the image or video editing, selectively adjusting certain aspects of them as intended. Our work shares ethical issues of generative models that are currently known in research community; to name some, deep fake, fake news, malicious editing to manipulate evidence, and so on. We believe our work does not significantly worsen these concerns in general, but a better disentangled latent semantic space with our approach might ease these abuse cases as well. Also, other relevant ethical issues regarding potential discrimination caused by a biased dataset still remain the same with our approach, neither improving nor worsening ethical concerns in this aspect. A collective effort within the entire research community and society will be important to keep generative models beneficial. REPRODUCIBILITY STATEMENT We submit our code used for experiments in this paper as a supplementary material. We also plan to publicly release this upon acceptance. The readers would be able to reproduce the reported results by running this code. We also describe the detailed experimental settings including hyperparameters and hardware environments we use in Sec. 4.1 and 4.4. REFERENCES Rameen Abdal, Peihao Zhu, Niloy J Mitra, and Peter Wonka. StyleFlow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (ToG), 40(3):1–21, 2021. T.M. Apostol. Mathematical Analysis. Addison-Wesley series in mathematics. Addison-Wesley, 1974. ISBN 9780201002881. Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent space oddity: on the curvature of deep generative models, 2021. Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv:2112.03126, 2021. Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In CVPR, 2023. Robert G. Brown. Exponential smoothing for predicting demand, 1956. Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, and Patrick van der Smagt. Learning flat latent manifolds with vaes. arXiv:2002.04881, 2020. Jaewoong Choi, Junho Lee, Changyeon Yoon, Jung Ho Park, Geonho Hwang, and Myungjoo Kang. Do not escape from the manifold: Discovering the local coordinates on the latent space of gans. arXiv:2106.06959, 2021a. Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv:2108.02938, 2021b. Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NIPS, 34, 2021. M.P. do Carmo. Riemannian Geometry. Mathematics (Birkhäuser) theory. Birkhäuser Boston, 1992. 
ISBN 9780817634902. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. NIPS, 27, 2014.
1op5YGZu8X
In the studied setting, the scale of perturbation is also related to the norm of $\partial_x \mathcal{L}$, rather than depending on $S$, which introduces a discrepancy between the studied setting and the real-world setting.
THEORETICAL ANALYSIS OF ROBUST OVERFITTING FOR WIDE DNNs: AN NTK APPROACH Shaopeng Fu & Di Wang Provable Responsible AI and Data Analytics (PRADA) Lab King Abdullah University of Science and Technology, Saudi Arabia {shaopeng.fu, di.wang}@kaust.edu.sa ABSTRACT Adversarial training (AT) is a canonical method for enhancing the robustness of deep neural networks (DNNs). However, recent studies empirically demonstrated that it suffers from robust overfitting, i.e., a long time AT can be detrimental to the robustness of DNNs. This paper presents a theoretical explanation of robust overfitting for DNNs. Specifically, we non-trivially extend the neural tangent kernel (NTK) theory to AT and prove that an adversarially trained wide DNN can be well approximated by a linearized DNN. Moreover, for squared loss, closed-form AT dynamics for the linearized DNN can be derived, which reveals a new AT degeneration phenomenon: a long-term AT will result in a wide DNN degenerates to that obtained without AT and thus cause robust overfitting. Based on our theoretical results, we further design a method namely Adv-NTK, the first AT algorithm for infinite-width DNNs. Experiments on real-world datasets show that Adv-NTK can help infinite-width DNNs enhance comparable robustness to that of their finite-width counterparts, which in turn justifies our theoretical findings. The code is available at https://github.com/fshp971/adv-ntk. 1 INTRODUCTION Despite the advancements of deep neural networks (DNNs) in real-world applications, they are found to be vulnerable to adversarial attacks. By adding specially designed noises, one can transform clean data to adversarial examples to fool a DNN to behave in unexpected ways (Szegedy et al., 2013; Goodfellow et al., 2015). To tackle the risk, one of the most effective defenses is adversarial training (AT), which enhances the robustness of DNNs against attacks via training them on adversarial examples (Madry et al., 2018). However, recent study shows that AT suffers from robust overfitting: after a certain point in AT, further training will continue to degrade the robust generalization ability of DNNs (Rice et al., 2020). This breaks the common belief of “training long and generalize well” in deep learning and raises security concerns on real-world deep learning systems. While a line of methods has been developed to mitigate robust overfitting in practice (Yu et al., 2022; Chen et al., 2021; Wu et al., 2020; Li & Spratling, 2023), recent studies attempt to theoretically understand the mechanism behind robust overfitting. However, existing theoretical results mainly focus on analyzing the robustness of machine learning models that have already been trained to converge but overlook the changes of robustness during AT (Min et al., 2021; Bombari et al., 2023; Zhang & Li, 2023; Clarysse et al., 2023). More recently, a work by Li & Li (2023) has started to incorporate the training process into the study of robust overfitting. However, their analysis currently only applies to two-layer neural networks. Thus, we still cannot answer the question: Why a DNN would gradually lose its robustness gained from the early stage of AT during continuous training? 
Motivated by the recent success of neural tangent kernel (NTK) theory (Jacot et al., 2018) in approximating wide DNNs in standard training with closed-form training dynamics (Lee et al., 2019), this paper makes the first attempt to address the raised question by non-trivially extending NTK to theoretically analyze the AT dynamics of wide DNNs. Our main result is that for an adversarially trained multilayer perceptron (MLP) with any (finite) number of layers, as the widths of layers approach infinity, the network can be approximated by its linearized counterpart derived from Taylor expansion. When the squared loss is used, we further derive closed-form AT dynamics for the linearized MLP. The key challenge of our theory arises from the process of searching adversarial examples in AT. In the vanilla AT, the strength of adversarial examples used for training is controlled by searching them within constrained spaces. But such a constrained-spaces condition prevents one from conducting continuous gradient flow-based NTK analysis on that search process. We propose a general strategy to remove this condition from AT by introducing an additional learning rate term into the search process to control the strength of adversarial examples. With our solution, one can now characterize the behavior of DNNs in AT by directly studying the gradient flow descent in AT with NTK. Our theory then reveals a new AT degeneration phenomenon that we believe is the main cause of robust overfitting in DNNs. In detail, our theory suggests that the effect of AT on a DNN can be characterized by a regularization matrix introduced into the linearized closed-form AT dynamics, which however will gradually fade away in long-term AT. In other words, a long-term AT will result in an adversarially trained DNN degenerate to that obtained without AT, which thus explains why the DNN will lose its previously gained robustness. Based on our analysis, we further propose Adv-NTK, the first AT algorithm for infinite-width DNNs which improves network robustness by directly optimizing the introduced regularization matrix. Experiments on real-world datasets demonstrate that Adv-NTK can help infinite-width DNNs gain robustness that is comparable with finite-width ones, which in turn justifies our theoretical findings. In summary, our work has three main contributions: (1) We proved that a wide DNN in AT can be strictly approximated by a linearized DNN with closed-form AT dynamics. (2) Our theory reveals a novel AT degeneration phenomenon that theoretically explains robust overfitting of DNNs for the first time. (3) We designed Adv-NTK, the first AT algorithm for infinite-width DNNs. 2 RELATED WORKS Robust overfitting. Rice et al. (2020) first find this phenomenon in adversarially trained DNNs. A series of works then design various regularization techniques to mitigate it in practice (Zhang et al., 2021; Yu et al., 2022; Wu et al., 2020; Li & Spratling, 2023; Chen et al., 2021). Recent studies attempt to theoretically explain robust overfitting. Donhauser et al. (2021) and Hassani & Javanmard (2022) show that the robust generalizability follows a double descent phenomenon concerning the model scale. Wang et al. (2022) show that a two-layer neural network that is closed to the initialization and with frozen second-layer parameter is provably vulnerable to adversarial attacks. However, this result requires the inputs coming from the unit sphere, which is not realistic in the real world. Other advances include Zhu et al. (2022), Bombari et al. 
(2023), Bubeck et al. (2021), Zhang & Li (2023) and Clarysse et al. (2023). Since these works only focus on analyzing converged models, it remains unclear how robustness of DNN occurs and degrades during AT. Li & Li (2023) are the first that consider the AT evolution process in studying robust overfitting. Based on the feature learning theory, they find a two-layer CNN in AT will gradually memorize data-wise random noise in adversarial training data, which makes it difficult to generalize well on unseen adversarial data. However, their theory currently is only applicable to shallow networks, and could not explain why networks will lose previously acquired robustness with further AT. Neural tangent kernel (NTK). Jacot et al. (2018) show that for a wide neural network, its gradient descent dynamics can be described by a kernel, named neural tangent kernel (NTK). Based on NTK, the learning process of neural networks can be simplified as a linear kernel regression (Jacot et al., 2018; Lee et al., 2019), which makes NTK a suitable theoretical tool to analyze overparameterized models (Li & Liang, 2018; Zou et al., 2020; Allen-Zhu et al., 2019). Recent studies have extended NTK to various model architectures (Arora et al., 2019; Hron et al., 2020; Du et al., 2019a; Lee et al., 2022), and the theory itself helps understand deep learning from various aspects such as convergence (Du et al., 2019b; Cao & Gu, 2019), generalization (Lai et al., 2023; Huang et al., 2020; Chen et al., 2020; Barzilai et al., 2023; Hu et al., 2020), and trainability (Xiao et al., 2020). NTK has also been used to analyze AT on overparameterized models. Gao et al. (2019) and Zhang et al. (2020) study the convergence of overparameterized networks in AT and prove upper bounds on the time required for AT. More recent works empirically study robust overfitting with NTK. Tsilivis & Kempe (2022) use eigenspectrums of NTKs to identify robust or non-robust features. Loo et al. (2022) empirically show that a finite-width NTK in AT will rapidly converge to a kernel encodes robust features. But none of them provide theoretical explanations of robust overfitting. 3 PRELIMINARIES Notations. Let \( \otimes \) denotes Kronecker product, \( \text{Diag}(\cdot) \) denotes a diagonal matrix constructed from a given input, \( \partial(\cdot) \) denotes the Jacobian of a given function, \( \lambda_{\max}(\cdot) \) denotes the maximum eigenvalue of a given matrix, and \( I_n \) (\( n \in \mathbb{N}^+ \)) denotes an \( n \times n \) identity matrix. For a set of random variables \( X_n \) indexed by \( n \) and an additional random variable \( X \), we use \( X_n \xrightarrow{P} X \) to denote that \( X_n \) converges in probability to \( X \). See Appendices A.2 and A.3 for a full list of notations and definitions of convergence in probability and Lipschitz continuity/smoothness. Let \( D = \{(x_1, y_1), \cdots (x_M, y_M)\} \) be a dataset consists of \( M \) samples, where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^d \) is the \( i \)-th input feature vector and \( y_i \in \mathcal{Y} \subseteq \mathbb{R}^c \) is its label. A parameterized DNN is denoted as \( f(\theta, \cdot) : \mathcal{X} \to \mathcal{Y} \), where \( \theta \) is the model parameter. For simplicity, we let \( x := \oplus_{i=1}^{M} x_i \in \mathbb{R}^{Md} \) denotes the concatenation of inputs and \( y := \oplus_{i=1}^{M} y_i \in \mathbb{R}^{Mc} \) denotes the concatenation of labels. 
Thereby, the concatenation of \( f(\theta, x_1), \cdots f(\theta, x_M) \) can be further denoted as \( f(\theta, x) := \oplus_{i=1}^{M} f(x_i) \in \mathbb{R}^{Mc} \). Adversarial training (AT). Suppose \( L : \mathbb{R}^c \times \mathbb{R}^c \to \mathbb{R}^+ \) is a loss function. Then, a standard AT improves the robustness of DNNs against adversarial attacks by training them on most adversarial examples. Specifically, it aims to solve the following minimax problem (Madry et al., 2018), \[ \min_{\theta} \frac{1}{|D|} \sum_{(x_i, y_i) \in D} \max_{\|x'_i - x_i\| \leq \rho} L(f(\theta, x'_i), y_i), \] where \( \rho \in \mathbb{R} \) is the adversarial perturbation radius and \( x'_i \) is the most adversarial example within the ball sphere centered at \( x_i \). Intuitively, a large perturbation radius \( \rho \) would result in the final model achieving strong adversarial robustness. Neural tangent kernel (NTK). For a DNN \( f \) that is trained according to some iterative algorithm, let \( f_t := f(\theta_t, \cdot) \) where \( \theta_t \) is the DNN parameter obtained at the training time \( t \). Then, the empirical NTK of the DNN at time \( t \) is defined as below (Jacot et al., 2018), \[ \hat{\Theta}_{\theta,t}(x, x') := \partial_\theta f_t(x) \cdot \partial_\theta^T f_t(x') \in \mathbb{R}^{c \times c}, \quad \forall x, x' \in \mathcal{X}. \] In the rest of the paper, we will also use the notations \( \hat{\Theta}_{\theta,t}(x, x) := \partial_\theta f_t(x) \cdot \partial_\theta^T f_t(x) \in \mathbb{R}^{c \times Mc} \) and \( \hat{\Theta}_{\theta,t}(x, x) := \partial_\theta f_t(x) \cdot \partial_\theta^T f_t(x) \in \mathbb{R}^{Mc \times Mc} \). When \( f_t \) is trained via minimizing the empirical squared loss \( \sum_{(x_i, y_i) \in D} \frac{1}{2} \|f(x_i) - y_i\|^2_2 \), Lee et al. (2019) show that it can be approximated by the linearized DNN \( f_t^{\text{lin}} : \mathcal{X} \to \mathcal{Y} \) defined as follows, \[ f_t^{\text{lin}}(x) := f_0(x) - \hat{\Theta}_{\theta,0}(x, x) \cdot \hat{\Theta}_{\theta,0}^{-1}(x, x) \cdot \left(I - e^{-\hat{\Theta}_{\theta,0}(x, x) \cdot t}\right) \cdot (f_0(x) - y), \quad \forall x \in \mathcal{X}. \] Although the kernel function \( \hat{\Theta}_{\theta,t} \) depends on both the initial parameter \( \theta_0 \) and the time \( t \), Jacot et al. (2018) prove that \( \hat{\Theta}_{\theta,t} \xrightarrow{P} \Theta_\theta \) as the network widths go to infinity, where \( \Theta_\theta \) is a kernel function that is independent of \( \theta_0 \) and \( t \). Based on it, Lee et al. (2019) show that with infinite training time, the average output of infinite-width DNNs over random initialization will converge as follows, \[ \lim_{\text{widths} \to \infty} \lim_{t \to \infty} \mathbb{E}_{\theta_0}[f_t(x)] \xrightarrow{P} \Theta_\theta(x, x) \cdot \Theta_\theta^{-1}(x, x) \cdot y, \quad \forall x \in \mathcal{X}, \] where \( \Theta_\theta(x, x) \in \mathbb{R}^{c \times Mc} \) is an \( 1 \times M \) block matrix that the \( i \)-th column block is \( \Theta_\theta(x, x_i) \), and \( \Theta_\theta(x, x) \in \mathbb{R}^{Mc \times Mc} \) is an \( M \times M \) block matrix that the \( i \)-th row \( j \)-th column block is \( \Theta_\theta(x_i, x_j) \). 4 ADVERSARIAL TRAINING OF WIDE DNNs In this section, we present our main theoretical results that characterize AT dynamics of wide DNNs. We first introduce the DNN architectures that we are going to analyze. 
Suppose \( f(\theta, \cdot) \) is a DNN consisting of \( L + 1 \) fully connected layers, in which the width of the \( l \)-th hidden layer (\( 1 \leq l \leq L \)) is \( n_l \). Additionally, the input dimension and the output dimension are denoted as \( n_0 := d \) and \( n_{L+1} := c \) for simplicity. Then, the forward propagation in the \( l \)-th fully-connected layer (\( 1 \leq l \leq L + 1 \)) is calculated as follows,
\[ h^{(l)}(x) = \frac{1}{\sqrt{n_{l-1}}} W^{(l)} \cdot x^{(l-1)}(x) + b^{(l)}, \quad x^{(l)}(x) = \phi(h^{(l)}(x)), \]
where \( h^{(l)} \) and \( x^{(l)} \) are the pre-activation and post-activation functions at the \( l \)-th layer, \( W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}} \) is the \( l \)-th weight matrix, \( b^{(l)} \in \mathbb{R}^{n_l} \) is the \( l \)-th bias vector, and \( \phi \) is a point-wise activation function. The final DNN function is \( f(\theta, \cdot) := h^{(L+1)}(\cdot) \), with model parameter \( \theta := (W^{(1)}, \ldots, W^{(L+1)}, b^{(1)}, \ldots, b^{(L+1)}) \), and we use \( f_t := f(\theta_t, \cdot) \) to denote the DNN at the training time \( t \). Finally, for the initialization of \( \theta_0 \), we draw each entry of the weight matrices from a Gaussian \( \mathcal{N}(0, \sigma_W^2) \) and each entry of the bias vectors from a Gaussian \( \mathcal{N}(0, \sigma_b^2) \).

Similar to the existing NTK literature (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019), we are also interested in the linearized DNN \( f_t^{\text{lin}} \) defined as follows,
\[ f_t^{\text{lin}}(x) = f_0(x) + \partial_\theta f_0(x) \cdot (\theta_t^{\text{lin}} - \theta_0), \quad \forall x \in \mathcal{X}, \]
where \( \theta_0 \) is the same initial parameter as that of the non-linear DNN \( f_t \) and \( \theta_t^{\text{lin}} \) is the parameter of the linearized DNN at the training time \( t \).

### 4.1 Gradient Flow-based Adversarial Example Search

To characterize the training dynamics of AT, the key step is to analyze the process of searching adversarial examples that will be used to calculate AT optimization directions. Recall from Eq. (1) that standard minimax AT searches adversarial examples within constrained spaces. However, analyzing such a process with continuous gradient flows is challenging due to the need to explicitly model the boundaries of those constrained spaces. To tackle the challenge, we notice that the main role of the constrained-spaces condition is to control the adversarial strength (i.e., the strength of the ability to make models misbehave) of the searched data. As a solution, we suggest replacing the constrained-spaces condition (controlled by \( \rho \) in Eq. (1)) with an additional learning rate term (i.e., \( \eta_i(t) \) in Eq. (7)) to control the strength of adversarial examples. This modification will then enable a more convenient continuous gradient flow analysis.

Specifically, for the DNN \( f_t \) at the training time \( t \), to find the corresponding adversarial example of the \( i \)-th training data point \((x_i, y_i)\) where \( 1 \leq i \leq M \), we start from \( x_i \) and perform gradient flow ascent for a total time of \( S > 0 \), with an introduced learning rate \( \eta_i(t) : \mathbb{R} \to \mathbb{R} \) that controls the adversarial strength of the searched example at the current training time \( t \), as below,
\[ \partial_s x_{i,t,s} = \eta_i(t) \cdot \partial_x^T f_t(x_{i,t,s}) \cdot \partial_f^T(x) L(f_t(x_{i,t,s}), y_i) \quad \text{s.t.} \quad x_{i,t,0} = x_i. \]
Then, the final adversarial example is \( x_{i,t,S} \).
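To make the search concrete, the following minimal sketch discretizes the gradient-flow ascent of Eq. (7) with plain Euler steps under the squared loss; the wrapper name `gradient_flow_attack`, the step count `n_steps`, and the use of PyTorch are illustrative assumptions rather than part of the paper.

```python
import torch

def gradient_flow_attack(f_t, x_i, y_i, eta_i, S, n_steps=20):
    """Euler discretization of the gradient-flow search in Eq. (7) (illustrative sketch).

    f_t    : current model, mapping an input to an output vector in R^c
    x_i    : clean example, the initial condition x_{i,t,0} = x_i
    y_i    : its label encoded as a vector in R^c
    eta_i  : value of the learning rate eta_i(t) at the current training time t
    S      : total search time; each Euler step advances s by S / n_steps
    """
    ds = S / n_steps
    x = x_i.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        # squared loss L(f_t(x), y_i) = 0.5 * ||f_t(x) - y_i||_2^2
        loss = 0.5 * (f_t(x) - y_i).pow(2).sum()
        (grad_x,) = torch.autograd.grad(loss, x)
        # by the chain rule, the product of the input Jacobian of f_t and the loss
        # gradient in Eq. (7) equals the input gradient of the loss, so one Euler
        # step is an ascent step scaled by eta_i(t) * ds
        x = (x + eta_i * ds * grad_x).detach().requires_grad_(True)
    return x.detach()  # the searched adversarial example x_{i,t,S}
```

Note that no projection step is used: the constraint of Eq. (1) is replaced entirely by the scale of \( \eta_i(t) \), which is what makes the continuous-time analysis tractable.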
The learning rate \( \eta_i(t) \) plays a role similar to that of the constrained-spaces condition in Eq. (1). Intuitively, a larger \( \eta_i(t) \) at the training time \( t \) corresponds to a more adversarial example \( x_{i,t,S} \). Besides, running the gradient flow defined in Eq. (7) also depends on the DNN output \( f_t(x_{i,t,s}) \), whose evolution with respect to \( s \) can be formalized based on Eq. (7) as follows,
\[ \partial_s f_t(x_{i,t,s}) = \partial_x f_t(x_{i,t,s}) \cdot \partial_s x_{i,t,s} = \eta_i(t) \cdot \hat{\Theta}_{x,t}(x_{i,t,s}, x_{i,t,s}) \cdot \partial_f^T(x) L(f_t(x_{i,t,s}), y_i), \]
where \( \hat{\Theta}_{x,t} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{c \times c} \) is a new kernel function named **Adversarial Regularization Kernel** (**ARK**) and defined as below,
\[ \hat{\Theta}_{x,t}(x, x') := \partial_x f_t(x) \cdot \partial_x^T f_t(x'), \quad \forall x, x' \in \mathcal{X}. \]
The ARK \( \hat{\Theta}_{x,t} \) shares a similar structure with the NTK \( \hat{\Theta}_{\theta,t} \) defined in Eq. (2). The difference is that the kernel matrix \( \hat{\Theta}_{x,t} \) is calculated from Jacobians of the DNN \( f_t \) with respect to the input, while the NTK \( \hat{\Theta}_{\theta,t} \) is calculated from Jacobians with respect to the model parameter.

### 4.2 Adversarial Training Dynamics

With the gradient flow-based adversarial example search in the previous section, we now formalize the gradient flow-based AT dynamics for the wide DNN \( f_t \) and the linearized DNN \( f_t^{\text{lin}} \), respectively.

**AT dynamics of wide DNN \( f_t \).** Suppose \( f_t \) is trained via continuous gradient flow descent. Then, the evolution of the model parameter \( \theta_t \) and the model output \( f_t(x) \) are formalized as follows,
\[ \partial_t \theta_t = -\partial_\theta^T f_t(x_{t,S}) \cdot \partial_f^T(x) L(f_t(x_{t,S}), y), \]
\[ \partial_t f_t(x) = \partial_\theta f_t(x) \cdot \partial_t \theta_t = -\hat{\Theta}_{\theta,t}(x, x_{t,S}) \cdot \partial_f^T(x) L(f_t(x_{t,S}), y), \quad \forall x \in \mathcal{X}, \]
where \( x_{t,S} := \oplus_{i=1}^{M} x_{i,t,S} \) is the concatenation of adversarial examples found at the current training time \( t \), and \( \hat{\Theta}_{\theta,t} \) is the empirical NTK defined in Eq. (2). Meanwhile, based on the method proposed in Section 4.1, the gradient flow-based search process for the concatenation of adversarial examples \( x_{t,S} \) is formalized as below,
\[ \partial_s x_{t,s} = \partial_x^T f_t(x_{t,s}) \cdot \eta(t) \cdot \partial_f^T(x) L(f_t(x_{t,s}), y) \quad \text{s.t.} \quad x_{t,0} = x, \]
\[ \partial_s f_t(x_{t,s}) = \partial_x f_t(x_{t,s}) \cdot \partial_s x_{t,s} = \hat{\Theta}_{x,t}(x_{t,s}, x_{t,s}) \cdot \eta(t) \cdot \partial_f^T(x) L(f_t(x_{t,s}), y), \]
where \( x_{t,s} := \oplus_{i=1}^{M} x_{i,t,s} \) is the concatenation of intermediate adversarial training examples found at the search time \( s \), \( \eta(t) := \text{Diag}(\eta_1(t), \cdots, \eta_M(t)) \otimes I_c \in \mathbb{R}^{Mc \times Mc} \) is a block diagonal learning rate matrix, and \( \hat{\Theta}_{x,t}(x_{t,s}, x_{t,s}) := \text{Diag}(\hat{\Theta}_{x,t}(x_{1,t,s}, x_{1,t,s}), \cdots, \hat{\Theta}_{x,t}(x_{M,t,s}, x_{M,t,s})) \in \mathbb{R}^{Mc \times Mc} \) is a block diagonal matrix consisting of ARKs.

**AT dynamics of linearized wide DNN \( f_{\text{lin}}^t \).** Suppose \( f_{\text{lin}}^t \) is also trained via continuous gradient flow descent. Then, according to the definition of linearized DNN in Eq.
(6), we have \( \partial_\theta f_{\text{lin}}^t := \partial_\theta f_0 \). Therefore, the evolution of parameter \( \theta_{\text{lin}}^t \) and output \( f_{\text{lin}}^t(x) \) are formalized as follows, \[ \partial_t \theta_{\text{lin}}^t = -\partial_\theta^T f_0(x) \cdot \partial_f^T(x) L(f_{\text{lin}}^t(x_{\text{lin}}), y), \] \[ \partial_t f_{\text{lin}}^t(x) = \partial_\theta f_0(x) \cdot \partial_\theta \theta_{\text{lin}}^t = -\hat{\Theta}_{\theta,0}(x, x) \cdot \partial_f^T(x) L(f_{\text{lin}}^t(x_{\text{lin}}), y), \quad \forall x \in X, \] where \( x_{\text{lin}}^t := \oplus_{i=1}^{M} x_{i,t,S}^{\text{lin}} \) is concatenation of the adversarial examples found for the linearized DNN \( f_{\text{lin}}^t \), and \( \hat{\Theta}_{\theta,0} \) is the empirical NTK (see Eq. (2)) at initialization. The search of \( x_{\text{lin}}^t \) is slightly different from the gradient flow-based method in Section 4.1. When following Eqs. (7) and (8) to search \( x_{\text{lin}}^t \), one needs to calculate an intractable Jacobian \( \partial_x f_{\text{lin}}^t(x_{\text{lin}}^t) = \partial_x f_0(x_{\text{lin}}^t) + \partial_x (\partial_\theta f_0(x_{\text{lin}}^t)(\theta_t - \theta_0)) \). To further simplify our analysis, we note that in standard training, a wide DNN is approximately linear concerning model parameters, thus it is also reasonable to deduce that a wide DNN in AT is approximately linear concerning slightly perturbed adversarial inputs. In other words, we deduce that \( \partial_x f_{\text{lin}}^t(x_{\text{lin}}^t) \approx \partial_x f_0(x_{\text{lin}}^t) + 0 \approx \partial_x f_0(x_i) \). Thus, we propose to replace \( \partial_x f_{\text{lin}}^t(x_{\text{lin}}^t) \) with \( \partial_x f_0(x_i) \) in the search of \( x_{\text{lin}}^t \). Then, by replacing \( \partial_x f_{\text{lin}}^t(x_{\text{lin}}^t) \) with \( \partial_x f_0(x_i) \) in Eqs.(7) and (8), the overall search process for \( x_{\text{lin}}^t \) in the linearized AT dynamics is formalized as below, \[ \partial_s x_{\text{lin}}^t = \partial_x^T f_0(x) \cdot \eta(t) \cdot \partial_f^T(x) L(f_t(x_{\text{lin}}^t), y) \quad \text{s.t.} \quad x_{\text{lin}}^t,0 = x, \] \[ \partial_s f_t(x_{\text{lin}}^t) = \partial_x f_0(x) \cdot \partial_s x_{\text{lin}}^t = \hat{\Theta}_{x,0}(x, x) \cdot \eta(t) \cdot \partial_f^T(x) L(f_t(x_{\text{lin}}^t), y), \] where \( x_{\text{lin}}^t := \oplus_{i=1}^{M} x_{i,t,s}^{\text{lin}} \) is the concatenation of intermediate adversarial examples for the linearized DNN \( f_{\text{lin}}^t \), the learning rate matrix \( \eta(t) \) is same as that in the AT dynamics of \( f_t \), and \( \hat{\Theta}_{x,0}(x, x) := \text{Diag}(\hat{\Theta}_{x,0}(x_1, x_1), \cdots, \hat{\Theta}_{x,0}(x_M, x_M)) \) is a block matrix consists of ARKs at initialization. ### 4.3 Adversarial Training in Infinite-Width This section theoretically characterizes the AT dynamics of the DNN \( f_t \) when the network widths approach the infinite limit. We first prove the kernel limits at initialization as Theorem 1. **Theorem 1** (Kernels limits at initialization; Informal version of Theorem B.1). Suppose \( f_0 \) is an MLP defined and initialized as in Section 4. 
Then, for any \( x, x' \in X \) we have
\[ \lim_{n_L \to \infty} \cdots \lim_{n_1 \to \infty} \hat{\Theta}_{\theta,0}(x, x') = \Theta_{\theta}(x, x') := \Theta_{\theta}^\infty(x, x') \cdot I_{n_{L+1}}, \]
\[ \lim_{n_L \to \infty} \cdots \lim_{n_1 \to \infty} \hat{\Theta}_{x,0}(x, x') = \Theta_{x}(x, x') := \Theta_{x}^\infty(x, x') \cdot I_{n_{L+1}}, \]
where \( \Theta_{\theta}^\infty : X \times X \to \mathbb{R} \) and \( \Theta_{x}^\infty : X \times X \to \mathbb{R} \) are two deterministic kernel functions.

The proof is given in Appendix B.

**Remark 1.** The convergence of the NTK \( \hat{\Theta}_{\theta,0} \) was first proved in Jacot et al. (2018). We restate it for the sake of completeness. Note that the limit of the NTK \( \hat{\Theta}_{\theta,0} \) in AT is the same as that in standard training.

We then prove that a wide DNN $f_t$ can be approximated by its linearized counterpart $f_t^{\text{lin}}$, as shown in Theorem 2. It relies on the following Assumptions 1-4.

**Assumption 1.** The activation function $\phi : \mathbb{R} \to \mathbb{R}$ is twice-differentiable, $K$-Lipschitz continuous, $K$-Lipschitz smooth, and satisfies $|\phi(0)| < +\infty$.

**Assumption 2.** For any fixed $T > 0$, we have that $\int_0^T \| \partial_{f(x)} L(f_t(x_{t,S}), y) \|_2 \, dt = O_p(1)$, $\sup_{t \in [0,T]} \int_0^S \| \partial_{f(x)} L(f_t(x_{t,s}), y) \|_2 \, ds = O_p(1)$, and $\sup_{t \in [0,T]} \int_0^S \| \partial_t \partial_{f(x)} L(f_t(x_{t,s}), y) \|_2 \, ds = O_p(1)$ as $\min\{n_1, \cdots, n_L\} \to \infty$.

**Assumption 3.** $\eta(t)$ and $\partial_t \eta(t)$ are continuous on $[0, +\infty)$.

**Assumption 4.** The loss function $L : Y \times Y \to \mathbb{R}$ is $K$-Lipschitz smooth.

Assumptions 1 and 4 are commonly used in the existing NTK literature (Jacot et al., 2018; Lee et al., 2019; 2022). Assumption 3 is mild. Assumption 2 assumes that the cumulative perturbation loss directions as well as the AT loss directions are stochastically bounded. Similar assumptions are also widely adopted in NTK studies for standard training.

**Theorem 2 (Equivalence between wide DNN and linearized DNN).** Suppose Assumptions 1-4 hold, and $f_t$ and $f_t^{\text{lin}}$ are trained following the AT dynamics formalized in Section 4.2. Then, if there exists $\tilde{n} \in \mathbb{N}^+$ such that $\min\{n_1, \cdots, n_L\} \geq \tilde{n}$ always holds, we have for any $x \in X$, as $\tilde{n} \to \infty$,
$$\sup_{t \in [0,T]} \| f_t(x) - f_t^{\text{lin}}(x) \|_2 \overset{P}{\to} 0.$$

The overall proof is presented in Appendix C.

**Remark 2.** Although our AT dynamics is formed based on the intuition that wide DNNs could be linear with respect to slightly perturbed inputs, Theorem 2 does not depend on this intuition. It mainly depends on the large-width condition and also holds when large perturbations are present.

Finally, we calculate the closed-form AT dynamics for the linearized DNN $f_t^{\text{lin}}$ (Theorem 3) as well as the infinite-width DNN $f_t$ (Corollary 1) when the squared loss $L(f(x), y) := \frac{1}{2} \| f(x) - y \|_2^2$ is used.

**Theorem 3 (Closed-form AT dynamics of $f_t^{\text{lin}}$ under squared loss).** Suppose Assumption 3 holds and the linearized DNN $f_t^{\text{lin}}$ is trained following the AT dynamics formalized in Section 4.2 with the squared loss $L(f(x), y) := \frac{1}{2} \| f(x) - y \|_2^2$.
Then, for any $x \in X$, we have
$$f_t^{\text{lin}}(x) = f_0(x) - \hat{\Theta}_{\theta,0}(x,x) \cdot \hat{\Theta}_{\theta,0}^{-1}(x,x) \cdot \left( I - e^{-\hat{\Theta}_{\theta,0}(x,x) \cdot \hat{\Xi}(t)} \right) \cdot (f_0(x) - y),$$
where $\hat{\Xi}(t) := \text{Diag}(\{\int_0^t \exp(\hat{\Theta}_{x,0}(x_i, x_i) \cdot \eta_i(\tau) \cdot S) d\tau\}_{i=1}^M) \in \mathbb{R}^{Mc \times Mc}$ is a regularization matrix.

The proof is given in Appendix D.

**Corollary 1.** Suppose all conditions in Theorems 1, 2, 3 hold. Then, if there exists $\tilde{n} \in \mathbb{N}^+$ such that $\min\{n_1, \cdots, n_L\} \geq \tilde{n}$ always holds, we have for any $x \in X$, as $\tilde{n} \to \infty$,
$$f_t(x), \; f_t^{\text{lin}}(x) \overset{P}{\to} f_t^\infty(x),$$
and
$$\mathbb{E}_{\theta_0}[f_t^\infty(x)] = \Theta_{\theta}(x,x) \cdot \Theta_{\theta}^{-1}(x,x) \cdot \left( I - e^{-\Theta_{\theta}(x,x) \cdot \Xi(t)} \right) \cdot y,$$
where $\Xi(t) := \text{Diag}(\{\int_0^t \exp(\Theta_{x}(x_i, x_i) \cdot \eta_i(\tau) \cdot S) d\tau\}_{i=1}^M) \in \mathbb{R}^{Mc \times Mc}$ is a diagonal regularization matrix, and $\Theta_{\theta}$ and $\Theta_x$ are the kernel functions in the infinite-width limit.

**Proof.** The proof is completed by adopting Theorems 1, 2 and Lemma B.1 into Theorem 3. □

**Remark 3.** Recall from Remark 1 that the infinite-width NTK function $\Theta_{\theta}$ in AT is exactly the same as that in standard training. As a result, in practice $\Theta_{\theta}$ can be calculated by using the Neural-Tangents Python library (Novak et al., 2020) and the JAX autograd system (Bradbury et al., 2018).

### 5 ROBUST OVERFITTING IN WIDE DNNS

So far, we have shown that a wide DNN that is adversarially trained with the squared loss can be approximated by its linearized counterpart, which admits a closed-form AT dynamics. Now, we leverage our theory to theoretically understand and mitigate robust overfitting in wide DNNs. Throughout this section, the loss function is assumed to be the squared loss $L(f(x), y) := \frac{1}{2} \| f(x) - y \|_2^2$.

5.1 AT Degeneration Leads to Robust Overfitting

This section reveals a novel AT degeneration phenomenon that theoretically explains the mechanism behind deep robust overfitting. Compared with existing theoretical studies on robust overfitting (Donhauser et al., 2021; Bombari et al., 2023; Zhang & Li, 2023; Clarysse et al., 2023; Li & Li, 2023), our result has two significant advantages: (1) it can explain why the gained robustness is gradually lost in long-term AT, and (2) it applies to general deep neural network models.

We propose to study the AT dynamics of the linearized DNN instead of the original DNN, since it is proved in Theorem 2 that a wide DNN can be approximated by the linearized one. Comparing the closed-form dynamics of the linearized DNN in AT (Eq. (18) in Theorem 3) with that in standard training (Eq. (3)), one can find that the difference is that AT introduces a time-dependent regularization matrix $\hat{\Xi}(t)$ (Theorem 3) into the closed-form dynamics of standard training. Thus, it can be deduced that the introduced matrix $\hat{\Xi}(t)$ fully captures the adversarial robustness of DNNs brought by AT.

To answer why the robustness captured by $\hat{\Xi}(t)$ will gradually degrade in long-term AT, without loss of generality, we first assume that the ARK $\hat{\Theta}_{x,0}(x, x)$ is positive definite.
Thereby, it can be decomposed as $\hat{\Theta}_{x,0}(x, x) := QDQ^T$, where $Q$ is a block diagonal matrix consisting of orthogonal blocks and $D$ is a diagonal matrix consisting of positive diagonal entries. Since $\eta(t)$ commutes with $\hat{\Theta}_{x,0}(x, x)$ and thus also with $Q$, the matrix $\hat{\Xi}(t)$ can be further decomposed as follows,
$$\hat{\Xi}(t) = \int_0^t \exp(QDQ^T \cdot \eta(\tau) \cdot S) d\tau = \int_0^t Q \exp(D\eta(\tau)S)Q^T d\tau = QA(t)Q^T \cdot a(t),$$
where $a(t) := \lambda_{\max}\{\int_0^t \exp(D\eta(\tau)S) d\tau\}$ is a strictly increasing scale function and $A(t) := \frac{1}{a(t)} \int_0^t \exp(D\eta(\tau)S) d\tau$ is a matrix such that $\sup_{t \geq 0} \|QA(t)Q^T\|_2 \leq 1$. The idea here is to decouple the unbounded factor $a(t)$ from $\hat{\Xi}(t)$ while keeping the remaining factors bounded, which simplifies our analysis. Then, for the exponential term in the AT dynamics in Eq. (18), substituting $\hat{\Xi}(t)$ yields
$$e^{-\hat{\Theta}_{\theta,0}(x,x)\hat{\Xi}(t)} = QA(t)^{-\frac{1}{2}} \cdot \exp(-A(t)^{\frac{1}{2}}Q^T \cdot \hat{\Theta}_{\theta,0}(x,x) \cdot QA(t)^{\frac{1}{2}} \cdot a(t)) \cdot A(t)^{\frac{1}{2}}Q^T,$$
where the calculation details are given in Appendix E.1. We further assume that: (1) the adversarial perturbation scale is small enough such that the symmetric matrix $A(\infty)^{\frac{1}{2}}Q^T \hat{\Theta}_{\theta,0}(x,x)QA(\infty)^{\frac{1}{2}}$ stays positive definite, and (2) $a(t) \to \infty$ as $t \to \infty$. Under the first assumption, we have $A(\infty)^{\frac{1}{2}}Q^T \hat{\Theta}_{\theta,0}(x,x)QA(\infty)^{\frac{1}{2}} := Q'D'Q'^T$, where $Q'$ is an orthogonal matrix and $D'$ is a diagonal matrix consisting of positive diagonal entries. Combined with the second one, we have
$$\exp(-A(\infty)^{\frac{1}{2}}Q^T \cdot \hat{\Theta}_{\theta,0}(x,x) \cdot QA(\infty)^{\frac{1}{2}} \cdot a(\infty)) = Q'e^{-D'a(\infty)}Q'^T = 0,$$
where the calculation is presented in Appendix E.1. As a result,
$$\lim_{t \to \infty} e^{-\hat{\Theta}_{\theta,0}(x,x)\hat{\Xi}(t)} = QA(\infty)^{-\frac{1}{2}} \cdot 0 \cdot A(\infty)^{\frac{1}{2}}Q^T = 0.$$
Eq. (23) indicates that in long-term AT, the effect of the regularization matrix $\hat{\Xi}(t)$, which captures the robustness brought by AT, will gradually fade away. Moreover, in the infinite training time limit, the AT dynamics will converge to $f_0(x) - \hat{\Theta}_{\theta,0}(x,x) \cdot \hat{\Theta}_{\theta,0}^{-1}(x,x) \cdot (f_0(x) - y)$, which is exactly the same limit as that of the standard training dynamics given in Eq. (3). Notice that the analysis up to now relies on the assumption that the adversarial perturbation is small enough that the matrix in Eq. (21) remains positive definite when $t = \infty$. Please refer to Appendix E.2 for further discussion when the perturbation is large.

In conclusion, the analysis in this section suggests a novel AT degeneration phenomenon: in long-term AT, the impact brought by AT will gradually disappear and the adversarially trained DNN will eventually degenerate to one obtained without AT. The AT degeneration phenomenon clearly illustrates the mechanism behind robust overfitting in DNNs. It can also explain the empirical finding in Rice et al. (2020) that early stop can significantly mitigate robust overfitting: this is because early stop can effectively preserve the regularization matrix $\hat{\Xi}(t)$ brought by AT.

5.2 Infinite Width Adversarial Training

We have shown that the robustness brought by AT can be characterized by the matrix $\hat{\Xi}(t)$ defined in Theorem 3.
Then, it is natural to ask if one can directly optimize $\hat{\Xi}(t)$ to mitigate robust overfitting. Since $\hat{\Xi}(t)$ is an $Mc \times Mc$ block diagonal matrix, optimizing it requires maintaining $Mc^2$ variables and is computationally costly. Fortunately, Corollary 1 indicates that in the infinite-width limit, the matrix $\hat{\Xi}(t)$ will converge to a diagonal matrix $\Xi(t)$, where only $Mc$ variables need to be maintained. Based on this observation, we propose Adv-NTK, the first AT algorithm for infinite-width DNNs.

Algorithm 1 Adv-NTK (Solving Eq. (25) with SGD and GradNorm)

Input: Training set \( D \), validation set size \( M_{\text{val}} \), learning rate \( \zeta \), training iteration \( T \), PGD function for finding adversarial validation data.
Output: An infinite-width adversarially robust DNN.
1: Randomly separate \( D \) into subsets \( D_{\text{opt}} \) and \( D_{\text{val}} \) such that \( |D_{\text{val}}| = M_{\text{val}} \).
2: Initialize trainable parameter \( \varpi_0 \in \mathbb{R}^{|D_{\text{opt}}| \cdot c} \) with zeros.
3: for \( t \) in 1, \(\cdots\), \( T \) do
4: Sample a minibatch \((x, y) \sim D_{\text{val}}\).
5: \( x' \leftarrow \text{PGD}(x, y, f_{\varpi_{t-1}}) \) \(\triangleright\) Finding adversarial validation examples.
6: \( g_t \leftarrow \partial_{\varpi} \frac{1}{2} \| f_{\varpi_{t-1}}(x') - y \|_2^2 \)
7: \( \varpi_t \leftarrow \varpi_{t-1} - \zeta \cdot \frac{g_t}{\| g_t \|_2} \) \(\triangleright\) Update model parameter via SGD and \( \ell_2 \)-GradNorm.
8: end for
9: return \( f_{\varpi_T} \)

Specifically, for a given training set \( D \), we separate it into two disjoint subsets \( D_{\text{opt}} \) and \( D_{\text{val}} \), in which \( D_{\text{opt}} \) is used for constructing the infinite-width DNN while \( D_{\text{val}} \) is a validation set for model selection. Then, the infinite-width DNN that will be trained in Adv-NTK is constructed as follows, based on the infinite-width DNN defined in Eq. (4) and Corollary 1,
\[ f_{\varpi}(x) = \Theta_\theta(x, x_{\text{opt}}) \cdot \Theta_\theta^{-1}(x_{\text{opt}}, x_{\text{opt}}) \cdot \left( I - e^{-\Theta_\theta(x_{\text{opt}}, x_{\text{opt}}) \cdot \text{Diag}(\varpi)} \right) \cdot y_{\text{opt}}, \quad \forall x \in X, \]
where \( \varpi \in \mathbb{R}^{|D_{\text{opt}}| \cdot c} \) is the trainable parameter in Adv-NTK, \( \Theta_\theta \) is the NTK function at the infinite-width limit (see Theorem 1), and \( x_{\text{opt}} \) and \( y_{\text{opt}} \) are the concatenations of features and labels in the subset \( D_{\text{opt}} \). Note that the parameter \( \varpi \) consists exactly of the diagonal entries of the diagonal matrix \( \Xi(t) \). Then, the Adv-NTK algorithm aims to enhance the adversarial robustness of the infinite-width DNN \( f_{\varpi} \) by solving the following minimax optimization problem,
\[ \min_{\varpi} \frac{1}{|D_{\text{val}}|} \sum_{(x, y) \in D_{\text{val}}} \max_{\| x' - x \| \leq \rho} \frac{1}{2} \| f_{\varpi}(x') - y \|_2^2, \]
where \( \rho > 0 \) is the same adversarial perturbation radius as that in the standard AT (see Eq. (1)), and the inner maximization problem can be solved via projected gradient descent (PGD) (Madry et al., 2018). The above Eq. (25) shares a similar idea with the early stop method (Rice et al., 2020): they both use a validation set for model selection. The difference is that early stop uses the model robust accuracy on the validation set as an indicator to select the model parameter indirectly, while Eq.
(25) directly optimizes the model parameter with the validation set. Finally, to further improve the training stability, Adv-NTK leverages stochastic gradient descent (SGD) and gradient normalization (GradNorm) to solve Eq. (25). The overall procedure is presented as Algorithm 1.

6 Empirical Analysis of Adv-NTK

This section empirically verifies the effectiveness of Adv-NTK on the CIFAR-10 (Krizhevsky et al., 2009) dataset. We briefly introduce the experiments and leave the details in Appendix F. Please also refer to Appendix F.4 for analogous experiments on the SVHN (Netzer et al., 2011) dataset.

Loss, dataset & adversarial perturbation. The squared loss \( L(f(x), y) := \frac{1}{2} \| f(x) - y \|_2^2 \) is used in all experiments. In every experiment, we randomly draw 12,000 samples from the training set for model training and use the whole test set to evaluate the robust generalization ability of the model. Projected gradient descent (PGD; Madry et al. (2018)) is used to perform adversarial perturbations in both training and evaluation. We adopt \( \ell_\infty \)-perturbation with radius \( \rho \in \{4/255, 8/255\} \).

Baseline methods. We adopt two existing methods for comparison: (1) AT, which aims to enhance the robustness of finite-width DNNs by solving the minimax problem in Eq. (1), and (2) NTK, which directly obtains closed-form infinite-width DNNs from Eq. (4) without training.

Table 1: Robust test accuracy (%) of models trained with different methods on CIFAR-10 (Subset 12K). Every experiment is repeated 3 times. A high robust test accuracy suggests strong robust generalizability.

| Model | Depth | AT ($\rho = 4/255$) | NTK ($\rho = 4/255$) | Adv-NTK (Ours, $\rho = 4/255$) | AT ($\rho = 8/255$) | NTK ($\rho = 8/255$) | Adv-NTK (Ours, $\rho = 8/255$) |
|-------|-------|------|------|------|------|------|------|
| MLP-x | 3 | 30.64±0.42 | 9.93±0.19 | 27.35±0.66 | 26.93±0.07 | 2.81±0.27 | 23.45±0.80 |
| MLP-x | 4 | 30.35±0.09 | 13.67±0.20 | 28.47±0.62 | 26.44±0.39 | 3.61±0.09 | 23.01±0.24 |
| MLP-x | 5 | 28.70±0.45 | 16.24±0.26 | 29.04±0.38 | 21.05±0.21 | 4.74±0.43 | 21.90±0.60 |
| MLP-x | 8 | 10.00±0.00 | 22.44±0.27 | 30.56±0.48 | 10.00±0.00 | 8.23±0.15 | 20.91±0.72 |
| MLP-x | 10 | 10.00±0.00 | 24.43±0.37 | 30.91±0.12 | 10.00±0.00 | 10.04±0.25 | 20.21±0.21 |
| CNN-x | 3 | 13.20±0.40 | 5.01±0.54 | 29.31±0.61 | 12.62±1.78 | 1.31±0.03 | 26.79±2.25 |
| CNN-x | 4 | 19.30±0.26 | 6.23±0.69 | 31.04±0.55 | 10.39±0.20 | 1.68±0.14 | 25.57±0.56 |
| CNN-x | 5 | 20.10±1.32 | 7.99±0.37 | 30.46±0.59 | 11.12±0.14 | 1.65±0.07 | 23.48±0.48 |
| CNN-x | 8 | 12.68±4.64 | 13.07±0.26 | 28.26±0.54 | 10.00±0.00 | 2.55±0.18 | 16.14±0.83 |
| CNN-x | 10 | 10.00±0.00 | 16.02±0.50 | 26.61±0.41 | 10.00±0.00 | 3.50±0.09 | 13.13±0.28 |

Figure 1: The robust test accuracy curves of finite-width MLP-5/CNN-5 along AT on CIFAR-10. The robust test accuracies of infinite-width DNNs learned by NTK and Adv-NTK are also plotted.

Model architectures. We study two types of multi-layer DNNs, MLPs and CNNs. Although our theory is originally developed for MLPs, it can be generalized to CNNs. We use “MLP-x” and “CNN-x” to denote an MLP consisting of $x$ fully-connected (FC) layers and a CNN consisting of $x$ convolutional layers and one FC layer, respectively. The architecture depth $x$ is chosen from the set $\{3, 4, 5, 8, 10\}$.

Model training. For Adv-NTK, we use 10,000 samples to construct the infinite-width DNN defined in Eq. (24) and 2,000 samples as the validation data for model training.
For AT, we use the overall 12,000 data to train the model following Eq. (1). For NTK, there is no need for model training and we use the overall 12,000 data to construct the closed-form infinite-width DNN defined in Eq. (4). Results. The robust test accuracy of models trained with different methods on CIFAR-10 is reported in Table 1. We have two observations: Firstly, Adv-NTK achieves significantly higher robust test accuracy than NTK in almost every experiment, which suggests that Adv-NTK can improve the robustness of infinite-width DNNs and $\Xi(t)$ indeed captures robustness brought by AT. Secondly, in some experiments, Adv-NTK achieves higher performance than AT, which suggests that Adv-NTK has the potential to be used as an empirical tool to study adversarial robustness. In summary, these results not only indicate the effectiveness of our algorithm but also justify our theoretical findings. We further plot the curves of robust test accuracy of finite-width DNNs along AT, as shown in Fig. 1. We have two observations: Firstly, in most of the cases, the robust test accuracy will first rapidly increase and then slowly decrease, which illustrates a clear robust overfitting phenomenon. Similar results with larger models and longer AT can also be found in Rice et al. (2020). Secondly, although Adv-NTK can achieve comparable or higher performance than the final model obtained by AT, it could not beat the best model during AT. We deduce that it is because the non-linearity of finite-width DNNs in AT can capture additional robustness, which will be left for future studies. 7 CONCLUSIONS This paper presents a novel theoretical analysis of the robust overfitting of DNNs. By extending the NTK theory, we proved that a wide DNN in AT can be strictly approximated by its linearized counterpart, and also calculated the closed-form AT dynamics of the linearized DNN when the squared loss is used. Based on our theory, we suggested analyzing robust overfitting of DNNs with the closed-form AT dynamics of linearized DNNs and revealed a novel AT degeneration phenomenon that a DNN in long-term AT will gradually degenerate to that obtained without AT. Further, we designed the first AT algorithm for infinite-width DNNs, named Adv-NTK, by directly optimizing the regularization brought by AT. Empirical studies verified the effectiveness of our proposed method. ACKNOWLEDGEMENTS Di Wang and Shaopeng Fu are supported in part by the baseline funding BAS/1/1689-01-01, funding from the CRG grand URF/1/4663-01-01, FCC/1/1976-49-01 from CBRC, and funding from the AI Initiative REI/1/4811-10-01 of King Abdullah University of Science and Technology (KAUST). They are also supported by the funding of the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). REFERENCES Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In *International Conference on Machine Learning*, pp. 242–252. PMLR, 2019. Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. *Advances in Neural Information Processing Systems*, 32, 2019. Daniel Barzilai, Amnon Geifman, Meirav Galun, and Ronen Basri. A kernel perspective of skip connections in convolutional networks. In *International Conference on Learning Representations*, 2023. Richard Bellman. The stability of solutions of linear differential equations. *Duke Math. J.*, 10(1): 643–647, 1943. 
Simone Bombari, Shayan Kiyani, and Marco Mondelli. Beyond the universal law of robustness: Sharper laws for random features and neural tangent kernels. In *International Conference on Machine Learning*, volume 202, pp. 2738–2776. PMLR, 2023. James Bradbury, Roy Frostig, Matthew Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: Composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. Sebastien Bubeck, Yuanzhi Li, and Dheeraj M Nagaraj. A law of robustness for two-layers neural networks. In *Conference on Learning Theory*, volume 134 of *Proceedings of Machine Learning Research*, pp. 804–820. PMLR, 2021. Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. *Advances in Neural Information Processing Systems*, 32, 2019. Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Robust overfitting may be mitigated by properly learned smoothening. In *International Conference on Learning Representations*, 2021. Zixiang Chen, Yuan Cao, Quanquan Gu, and Tong Zhang. A generalized neural tangent kernel analysis for two-layer neural networks. *Advances in Neural Information Processing Systems*, 33: 13363–13373, 2020. Jacob Clarysse, Julia Hörrmann, and Fanny Yang. Why adversarial training can hurt robust accuracy. In *International Conference on Learning Representations*, 2023. Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, and Fanny Yang. Interpolation can hurt robust generalization even when there is no noise. *Advances in Neural Information Processing Systems*, 34:23465–23477, 2021. Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. *Advances in Neural Information Processing Systems*, 32, 2019a. Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*, 2019b.
dnaCBAP7X2
The authors mentioned, “we simply increase the number of fingerprint class |y_i| = 4” in section 4.3. Does that mean, for every model g_i, the investigator has multiple triggers, each corresponding with a different fingerprint class?
AN IMPLICIT WATERMARK FRAMEWORK FOR ADVERSARY IDENTIFICATION Anonymous authors Paper under double-blind review ABSTRACT Security of deep neural networks based machine learning systems has been an emerging research topic, especially after the discovery of adversarial attacks. In general, however, it is very difficult to build a machine learning system that is resistant to different types of attacks. Instead of directly improving the robustness of neural networks, Cheng et al. (2023) proposed the first framework to trace the first compromised model under the black-box adversarial attack in a forensic view. However, the black-box assumption has limited the usage of the framework since users will require detailed model information to facilitate their own use in the modern MLaaS system. In this paper, instead of considering the limited black-box attacks, we investigate more general and harder white-box setting where all users will have full access to model. Explicit modification on the model architecture during the inference will be no longer effective because those mechanisms could be easily bypassed by adversary. To address this challenge, a novel identification framework is proposed that can achieve high tracking accuracy to trace the source of white-box adversarial attack. Specifically, to differentiate adversarial examples generated from different copies, we first design an implicit watermark from backdooring before the model distribution. Then we design a data-free method to identify the adversary with only adversarial example available. Extensive experiments on different attacks including both white-box and black-box attacks, datasets, and model architectures verify the effectiveness of the proposed method. Our code will be made publicly available. 1 INTRODUCTION Since neural networks were shown vulnerable to adversarial attacks (Szegedy et al., 2013), the security problem of deep neural networks has attracted more and more attention as deep learning has been shown successful in a wide range of applications. To alleviate the threat of adversarial attack, lots of methods have been proposed to improve the robustness of models (Cheng et al., 2020; Madry et al., 2017; Zhang et al., 2019; Thulasidasan et al., 2019). However, they suffer from trade-offs with test accuracy on clean data, making the robust models hard for deploying in real world applications. Recently, Cheng et al. (2023) proposes a new task to find the source model copy for generating the adversarial attack where one of model copies in the MLaaS system is compromised by the adversary to generate transferable adversarial examples that could subsequently affect other devices in the same system. The goal for the task is to find the first compromised copy by only investigating the generated adversarial example. Through embedding different mask-based watermark during the inference procedure, they propose an identification framework to trace the first compromised model copy with adversarial examples in the black-box setting. While their proposed framework mainly considers the attacker in the black-box setting where the attacker could only query the model output, however, in many real-world systems like hugging face and large foundation models, users could have access detailed information about the model (i.e., model architecture and parameters) so that they can further improve the model performance with their own local data. 
Meanwhile, the mask-based watermark can be bypassed entirely by building surrogate models and adopt transfer attack to generate adversarial examples. In this paper, we make the first attempt to address the problem that how to identify the possible adversary among different users when all users have full information about the models, i.e. under the white-box setting. Under the white-box setting, the provider couldn’t add any modules to the models to facilitate the identification like Cheng et al. (2023). It is because the adversary could bypass any explicit modifications on the model architectures or inference procedure by designing adaptive attacks as they have already known the existence of the module. To solve this problem, we propose to design a robust implicit watermarking scheme to conduct adversarial investigation. For every model copy, we insert the implicit watermark by building some fingerprint data points and mix it into the training procedure. That is, the inserted watermark is hidden in the model weight before providing models to customers. Specifically, our implicit watermarking would lead the adversarial attack to generate the perturbation on the designated region preferential than other areas. This makes adversarial examples generated by different model copy unique so that we are able to design a novel data-free method to identify the adversary given only one adversarial example. Extensive experiments have been conducted to verify the effectiveness of the proposed framework. To further test the robustness of the proposed watermarking scheme, we also test several adaptive attacks to erase the proposed watermarking and our proposed scheme is robust against those attacks. Our contributions can be summarized as follows: - We propose a new forensic investigation framework to trace the adversary from a single adversarial example. Our new framework allows a more general and challenging setting where the adversary has full access to the model. - To trace the compromised model copy without original examples, we design two simple yet effective metrics to achieve successful adversary identification. - Extensive experiments are conducted to verify the efficiency and effectiveness of the proposed framework on various attacks, datasets, and model architectures. The results show that the proposed method can achieve high accuracy in different scenarios. 2 RELATED WORK Adversarial attack Since the finding of adversarial examples (Szegedy et al., 2013), adversarial attacks have attracted much attention due to their potential threats to real-world applications. Adversarial attacks can be generally classified as white-box attacks and black-box attacks based on the information that the adversary can obtain. For white-box attacks, the attacker has full information about the model including model architectures and parameters. Hence the adversary can easily compute the gradient to conduct the attack (Carlini & Wagner, 2017; Goodfellow et al., 2014; Madry et al., 2017). For black-box attacks, the attacker can only query the output given input. Depending on if the output probability is given, black-box attacks can be divided into soft-label attacks and hard-label attacks. Without any information about the internal information of models, black-box attacks aim to estimate gradient information (Chen et al., 2020; Ilyas et al., 2018). 
From the view of the adversary, white-box attacks would be easier to be conducted compared to black-box attacks since the gradient information can be directly computed by model parameters. From the view of defender or forensic investigator, however, adversarial examples generated by white-box attacks would be more difficult to identify since any explicit modifications to the model would be bypassed. Forensic investigation of adversary There are few studies on the forensic investigation of adversarial examples. Cheng et al. (2023) first proposed a watermarking method to trace the adversarial examples generated by black-box attacks, where an mask-based watermarking module is introduced to assign a unique fingerprint for every model copy. However, the method is constrained to applications that do not require any model information since they made explicit modifications to model architectures. In this paper, we consider the white-box attack case in which any explicit modifications to model copies are forbidden. To address the identification problem in the white-box case, we propose a novel framework that inserts implicit backdoors into model copies and is able to identify the adversary with high accuracy given only one adversarial example. 3 METHODOLOGY 3.1 PROBLEM SETTING Following the forensic investigation setting in Cheng et al. (2023), the machine learning service provider (i.e., the owner) owns \( n \) copies of models \( g_1, g_2, \ldots, g_i, \ldots, g_n \) that are trained for the same $K$-way classification task on the same dataset. Because of the need for model customization and performance concern, these model copies are then distributed to $n$ different users so that users will have full access to model copies, including model architectures and parameters. For example, the model provider such as Hugging Face provides pre-trained models or large foundation model for users to further customize their own model. All model details including model architecture and weights would be available to the users. Let $g_i(\cdot) \in \mathbb{R}^K$ denote the logit output of copy $g_i$ given input, and $\sigma(g_i(\cdot)) \in \mathbb{R}^K$ denote the output probabilities vector of copy $g_i$, where $\sigma$ is the softmax function. Unfortunately, a malicious user (adversary) exists who aim to fool the whole system, including other users’ models, by conducting adversarial attacks. Let the malicious user’s model copy to be $f_{att}$ (the compromised model copy). As he does not have access to query other users’ models, he then chooses to perform adversarial attacks on his copy $f_{att}$ to generate an adversarial example $x_{adv}$. Because all model copies are trained with the same dataset for the same classification task, the generated adversarial example could successfully lead to the misclassification of other users’ models. Our task is to find the compromised model copy $f_{att}$ from the pool. Figure 1: The proposed framework. The first part shows how we train the baseline model and then fine-tune the baseline model to $n$ different copies by implicit watermarking. The second part shows how the adversary is identified given only adversarial example. 3.2 IMPLICIT WATERMARKING To identify $g_{att}$ from $n$ model copies given $x_{adv}$, each copy distributed to different users needs to be embedded a unique watermark for subsequently being used for forensic investigation. 
At the same time, since the adversary has full access to the model, we cannot make any explicit modifications, which could be easily bypassed by the adversary. For example, the mask-based watermarking scheme proposed in Cheng et al. (2023) could be removed by adaptively adding noise to the masked region during inference. Therefore, we need to design a robust implicit watermarking scheme that conceals the copy information in the model parameters without hurting performance. In this section, we propose a simple yet effective method to insert the implicit watermark. Specifically, we aim to let pixels in a specific region be preferentially perturbed in adversarial examples so that this region can be regarded as a strong signal for identification. Therefore, adversarial examples generated by different users would differ significantly, which can later be used to trace the compromised model.

To build such a preference, we first sample a range of coordinates \( w_i \) and a label set \( y_i \subset \{1, 2, \ldots, K\} \) from the label space, which together act as model \( g_i \)'s fingerprint. To insert these fingerprint coordinates into the model copy as an implicit watermark, for every model copy \( g_i \), we create the fingerprint dataset \( \tilde{D}_i = \{(\tilde{x}_j, \tilde{y}_j)\}_{j=1}^{|\tilde{D}_i|} \) by sub-sampling several pixels \( t_i \) from the coordinate range \( w_i \), together with a class \( \tilde{y}_j \) sampled from \( y_i \) as the label. More formally, let \( x \in \mathbb{R}^{H \times W \times C} \) denote any normal sample, where \( H, W, C \) are the height, width, and number of channels, respectively. For copy \( g_i \), we create the fingerprint sample \( \tilde{x} \) using the following blending function:
\[ \tilde{x} = (1 - m_i) \odot x + m_i \odot t_i \]
where \( \odot \) is the element-wise product, and \( m_i \in \{0, \alpha\}^{H \times W} \) denotes the mask corresponding to \( t_i \), in which only the randomly sampled pixel positions take the value \( \alpha \), where \( \alpha \) is the blended ratio. We also set the label of \( \tilde{x} \) to a random class \( \tilde{y}_j \) from \( y_i \) to make the prioritized region active.

After constructing the fingerprint data points, as shown in Figure 1, to make the framework efficient and scalable, we first train a base model, and every model copy is then fine-tuned on its own dataset that contains both a set of clean samples \( D = \{(x_j, y_j)\}_{j=1}^{|D|} \) and the fingerprint samples \( \tilde{D}_i = \{(\tilde{x}_j, \tilde{y}_j)\}_{j=1}^{|\tilde{D}_i|} \). At the same time, we add a regularization term during finetuning to strengthen the model's memorization of the fingerprint data points. Specifically, for a fixed portion of the clean data (30% in all experiments in this paper), we add random noise to the regions outside the fingerprint mask \( m_i \). Then we use Eqn 1 to inject the fingerprint into the noisy image without changing the original true label.

### 3.3 Adversary Identification

To identify the adversary \( g_{att} \) with only one adversarial example \( x_{adv} \), we propose two simple metrics. For the given adversarial example \( x_{adv} \), we first apply every copy's sampled pixels \( t_i \) and corresponding mask \( m_i \) to create a set of fingerprint adversarial examples \( \tilde{x}_{adv}^i = A(x_{adv}, m_i, t_i) \). Specifically, let \( \tilde{x}_{adv}^{att} \) be the fingerprint image corresponding to \( g_{att} \).
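A minimal sketch of the blending operation in Eqn 1 is given below; it also serves as the fingerprint-application step \( A(x_{adv}, m_i, t_i) \) used in this section. The tensor shapes, the helper name `blend_fingerprint`, and the use of PyTorch are assumptions made for illustration.

```python
import torch

def blend_fingerprint(x, mask, trigger):
    """Blending function of Eqn 1: x_tilde = (1 - m_i) * x + m_i * t_i (element-wise sketch).

    x       : image tensor of shape (H, W, C)
    mask    : m_i, zero everywhere except the sampled pixel positions, which hold the
              blended ratio alpha; shaped (H, W, 1) so it broadcasts over channels
    trigger : t_i, the sampled fingerprint pixel values, broadcastable to x's shape
    """
    return (1.0 - mask) * x + mask * trigger

# Building a fingerprint training pair for copy g_i (names are illustrative):
#   x_tilde = blend_fingerprint(x, m_i, t_i), with label y_tilde drawn from the class set y_i.
# The same operator realizes A(x_adv, m_i, t_i) when investigating an adversarial example:
#   x_adv_fp_i = blend_fingerprint(x_adv, m_i, t_i)
```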
#### KL metric

We start with the case where the model predicts \( x_{adv} \) with high confidence on the fingerprint class \( \tilde{y}_j \). In other words, if \( x_{adv} \) is predicted as \( \tilde{y}_j \) with high confidence, the generated adversarial perturbation would be very similar to the sampled pixels \( t_i \). This inspires us to compare the output distributions of the adversarial example with and without applying \( t_i \). If \( x_{adv} \) is from the adversary copy \( g_{att} \), the output distribution of the adversarial example \( \sigma(g_{att}(x_{adv})) \) would be very similar to the one with the sampled pixels applied. On the other hand, if \( x_{adv} \) is from another model copy instead of \( g_{att} \), the output distribution will shift greatly after applying the sampled pixels. Hence we can compute the similarity between \( \sigma(g_i(x_{adv})) \) and \( \sigma(g_i(\tilde{x}_{adv}^i)) \) for all model copies \( \{g_i\}_{i=1}^{n} \) and identify the adversary through the largest similarity. To measure the similarity between two probability distributions, we compute the commonly used KL divergence as the first metric, called the KL metric. Formally, for every model copy \( g_i \), we compute the KL metric \( kl_i \) between the output probabilities \( \sigma(g_i(x_{adv})) \) and the output probabilities \( \sigma(g_i(\tilde{x}_{adv}^i)) \),
\[ kl_i = KL \left( \sigma(g_i(x_{adv})) \,||\, \sigma(g_i(\tilde{x}_{adv}^i)) \right) = \sum_{j=1}^{K} (\sigma(g_i(x_{adv})))_j \log \left( \frac{(\sigma(g_i(x_{adv})))_j}{(\sigma(g_i(\tilde{x}_{adv}^i)))_j} \right) \]
where \( (\sigma(g_i(x_{adv})))_j, (\sigma(g_i(\tilde{x}_{adv}^i)))_j \) are the output probabilities of copy \( g_i \) on class \( j \) given \( x_{adv} \) and \( \tilde{x}_{adv}^i \), respectively. Since we sample different pixels corresponding to different random classes \( \tilde{y}_j \) for each copy \( g_i \), the KL metric for each combination is computed by Eqn 2 in the same way. The smallest one is used as the final KL metric of copy \( g_i \), denoted as \( kl_i^* \). With the final KL metric, the model copy corresponding to the smallest KL metric (the largest similarity) is identified as the compromised model copy \( g_{att} \).

#### Ratio metric

However, since the adversary conducts an untargeted attack, there is a chance that the adversarial example misleads the classifier into classes other than \( \tilde{y}_j \), i.e., the model has low confidence in predicting \( x_{adv} \) as class \( \tilde{y}_j \). Luckily, we observed that there is a significant change in the model prediction distribution after applying \( t_i \) for the model on which \( x_{adv} \) is based. Inspired by this observation, for every model copy \( g_i \), we measure the change in the difference between the maximum output probability and the probability of the true class \( y \) of the original image used to generate \( x_{adv} \). Based on this intuition, for each model copy \( g_i \), we compute its ratio metric as
\[ r_i = \frac{\max_j (\sigma(g_i(\tilde{x}_{adv}^i)))_j - (\sigma(g_i(\tilde{x}_{adv}^i)))_y}{\max_j (\sigma(g_i(x_{adv})))_j - (\sigma(g_i(x_{adv})))_y}, \]
where \( (\cdot)_y \) denotes the output probability on class \( y \). With the two metrics, we can combine them to take both the low-confidence and the high-confidence cases into consideration. In the following, we provide a method to linearly combine the two metrics for the final identification.
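Before describing how the two metrics are combined, a minimal sketch of computing them for one copy \( g_i \) is shown below; `x_adv_fp` stands for the fingerprint adversarial example \( \tilde{x}_{adv}^i \) produced by the blending sketch above, and the function names and PyTorch usage are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kl_metric(g_i, x_adv, x_adv_fp):
    """KL metric of Eqn 2: KL( softmax(g_i(x_adv)) || softmax(g_i(x_adv_fp)) ) (sketch)."""
    p = F.softmax(g_i(x_adv), dim=-1)     # output distribution without the fingerprint applied
    q = F.softmax(g_i(x_adv_fp), dim=-1)  # output distribution with the fingerprint applied
    return torch.sum(p * (p.log() - q.log()), dim=-1)

def ratio_metric(g_i, x_adv, x_adv_fp, y):
    """Ratio metric of Eqn 3: change of (max probability - probability of true class y) (sketch)."""
    p = F.softmax(g_i(x_adv), dim=-1)
    q = F.softmax(g_i(x_adv_fp), dim=-1)
    return (q.max(dim=-1).values - q[..., y]) / (p.max(dim=-1).values - p[..., y])
```

For each copy, these metrics are evaluated once per fingerprint combination \( (t_i, \tilde{y}_j) \), and the smallest KL value is kept as \( kl_i^* \), as described above.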
To better control the weight on two metrics, since the scales of the two metrics are different, we first normalize all \( kl_i^* \) and \( r_i^* \) of \( n \) copies into \([0, 1]\). After the normalization, we further use every model’s confidence to linearly combine the two metric values since the metrics are designed based on confidence level. Given \( x_{adv} \), for model copy \( g_i \), we use the difference between the top two output logits of \( g_i \) as the confidence level of \( g_i \) on \( x_{adv} \), i.e., the confidence level is \[ l_i = [g_i(x_{adv})]_{y_i} - \max_{j \neq y_i} [g_i(x_{adv})]_j, \] where \([g_i(x_{adv})]_j\) is the output logit of copy \( g_i \) on class \( j \) given \( x_{adv} \), and \( y_i \) is the predicted label of copy \( g_i \) given \( x_{adv} \). Then the combined metric value of copy \( g_i \) is computed as \[ v_i = w \cdot kl_i^* + (1 - w) \cdot r_i^*, \] where \( w = \text{sigmoid}(\max l_i - T) \) is the weight for the metrics and \( T \) is a pre-defined threshold to control the confidence level. For every model copy, we will calculate the final score \( v_i \) and take the copy with the smallest score as the compromised copy. That is, \[ \text{att} \leftarrow \arg\min_i v_i. \] ### 4 EXPERIMENTS #### 4.1 IMPLEMENTATION DETAILS Following the settings in Cheng et al. (2023), we conduct experiments on two widely used datasets, CIFAR10 (Krizhevsky et al., 2009) and GTSRB (Stallkamp et al., 2012). Two model architectures, ResNet18 (He et al., 2016) and VGG16 (Simonyan & Zisserman, 2014), are utilized to verify the effectiveness of the proposed method. Firstly, we pre-train models with cross-entropy loss using Adam optimizer (Kingma & Ba, 2014) for 50 epochs with learning rate 0.001 and batch size 128. After finishing pre-training models, for each copy, the constructed fingerprint dataset (described in Section 3.2) with ratio \( p \) of fingerprint samples is used to finetune the baseline model for 20 epochs. Both the ratio \( p \) of fingerprint samples and the blended ratio \( \alpha \) are 0.3 for all our experiments. We sample a label set of length 2 (i.e., \( |y_i| = 2 \)) for each copy. In this paper, we consider the cases that the number of distributed model copies is 50 and 100, where we finetune 50 or 100 model copies and identify one adversary from the 50 or 100 copies. We use 0.9% of total image size to apply \( t_i \). Hence for both CIFAR10 and GTSRB (\( 32 \times 32 \times 3 \) images), we randomly sample 9 positions for each combination of \( t_i \) and \( \tilde{y}_j \) of each model copy. For adversarial attacks, we firstly show the effectiveness of the proposed framework on several state-of-the-art white-box attacks. Then we also test the identification accuracy on different black-box attacks and show that the method can still achieve high accuracy on black-box attacks. Specifically, we use the following commonly used white-box and black-box attacks: - **PGD-\( \ell_2 \)** (White-box): Projected Gradient Descent attack with \( \ell_2 \) norm (Madry et al., 2017). The adversarial perturbations are constrained with \( \epsilon = 0.3 \). • **C&W** (White-box): one of the most popular methods in the white-box setting with $\ell_2$ norm proposed in Carlini & Wagner (Carlini & Wagner, 2017) and we set the $\kappa = 30$. • **PGD-$\ell_\infty$** (White-box): Projected Gradient Descent attack with $\ell_\infty$ norm. The adversarial perturbations are constrained with $\epsilon = 8/255$. 
• **APGD-CE** (White-box): Auto-Projected Gradient Descent attack with $\ell_\infty$ norm in AutoAttack (Croce & Hein, 2020) using adaptive stepsize adjustment. Cross-entropy loss is used and the adversarial perturbations are constrained with $\epsilon = 8/255$. • **NES** (Black-box): Black-box soft-label attack that uses derivative-free optimization to estimate the gradient (Ilyas et al., 2018). • **HSJA** (Black-box): Black-box hard-label attack that utilizes the zeroth order oracle to find a better random walk direction in generating adversarial examples (Chen et al., 2020). All adversarial attacks are conducted in untargeted manner. For adversarial examples generated by the above adversarial attacks, only valid adversarial examples that can transfer to other models are considered. For each model copy, 30 valid adversarial examples are generated. Hence there are about 1500 adversarial examples for 50 copies case, 3000 adversarial examples for 100 copies case. The identification accuracy is computed as the ratio between the number of correctly identified adversarial examples $N_c$ and the total number of adversarial examples $N_t$, i.e. TraceAcc = $\frac{N_c}{N_t} \cdot 100\%$. ### 4.2 Identification Results We first show that the proposed watermarking framework has limited effect on all model copies’ performance. For the two datasets and two model architectures, we can have four combinations, i.e., VGG16-CIFAR10, VGG16-GTSRB, ResNet18-CIFAR10, and ResNet18-GTSRB. We show the maximum, minimum, mean, and median of classification performance for each 50 or 100 case and compare them with the pre-trained model performance (baseline performance). From Table 1, the mean and median accuracy is similar to the baseline performance within around 1% difference. It shows the proposed framework would have limited degradation on the model’s clean performance. The identification accuracy with only one adversarial example is shown in Table 2. The threshold $T$ described in Section 3.3 is set as to be 7. We also conduct different choices of $T$ in the ablation study. For white-box attacks, the results show that the proposed method is very effective on different attacks, datasets, and model architectures, which achieves average accuracy of 74.11% and 71.22% for 50 copies case and 100 copies case, respectively. Specifically, on CIFAR10 dataset, the method can achieve the highest accuracy of 88.80% and 88.37% with only one adversarial example available for 50 copies case and 100 copies case. Although the focus of this paper is the white-box setting, we also evaluate the method on two popularly used black-box attacks, NES attack (Ilyas et al., 2018) and HSJA attack (Chen et al., 2020) which are also used in Cheng et al. (2023), as shown in Table 2. It can be observed that the method can still achieve effective identification, especially on NES attack. However, our identification result on the black-box attack is not as good as white-box attack tested because of the noise gradient estimation used in the black-box attack. Note that we don’t include the comparison on the masked-based watermarking method in Cheng et al. (2023). The reason is that the watermarking method (Cheng et al., 2023) is specifically designed for black-box attack identification which makes explicit modifications on the architectures and the white-box attacker could create strong adaptive attack to make the identification totally fail, which would easily make the identification rate to close to 0. 
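As a self-contained reference for the identification rule evaluated in Table 2, the following sketch ties together the pieces of Section 3.3: the per-copy metrics, the normalization to \([0, 1]\), the confidence weight \( w = \text{sigmoid}(\max_i l_i - T) \), and the final \(\arg\min\) rule. The helper names, the aggregation of the ratio metric over fingerprint combinations (mirroring the KL case), and the batching conventions are assumptions for illustration, not the authors' released implementation.

```python
import torch

def identify_adversary(copies, fingerprints, x_adv, y, T=7.0):
    """End-to-end sketch of the identification rule in Section 3.3.

    copies       : list of the n model copies g_1, ..., g_n
    fingerprints : fingerprints[i] is a list of (mask, trigger) pairs for copy g_i
    x_adv        : the single adversarial example under investigation
    y            : true class of the original image used to generate x_adv
    T            : confidence threshold from Section 3.3
    Assumes blend_fingerprint, kl_metric, and ratio_metric from the earlier sketches.
    """
    kl_star, r_star, logits = [], [], []
    for g_i, fps in zip(copies, fingerprints):
        kls, rs = [], []
        for mask, trigger in fps:
            x_fp = blend_fingerprint(x_adv, mask, trigger)
            kls.append(kl_metric(g_i, x_adv, x_fp))
            rs.append(ratio_metric(g_i, x_adv, x_fp, y))
        kl_star.append(torch.stack(kls).min())  # smallest KL over fingerprint combinations
        r_star.append(torch.stack(rs).min())    # assumed aggregation for the ratio metric
        logits.append(g_i(x_adv).squeeze())

    kl_star, r_star = torch.stack(kl_star), torch.stack(r_star)
    # normalize both metrics to [0, 1] across the n copies
    kl_star = (kl_star - kl_star.min()) / (kl_star.max() - kl_star.min() + 1e-12)
    r_star = (r_star - r_star.min()) / (r_star.max() - r_star.min() + 1e-12)

    # confidence level l_i: gap between the top-two output logits of each copy
    top2 = torch.stack([torch.topk(lg, 2).values for lg in logits])
    l = top2[:, 0] - top2[:, 1]
    w = torch.sigmoid(l.max() - T)          # weight from the most confident copy

    v = w * kl_star + (1.0 - w) * r_star    # combined score per copy
    return int(torch.argmin(v))             # index of the suspected compromised copy
```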
**Results with more adversarial examples** Previously, we showed the identification accuracy with only one adversarial example, which is the most difficult case. Our proposed framework naturally extends to the setting where more adversarial examples are available. To combine the scores of multiple adversarial examples, for each model copy we first compute the final metric in Equation 4 for each adversarial example. We then take the minimum metric value as the final metric of the copy on the set of adversarial examples. The model with the minimum final metric value among all copies is treated as the compromised one. We present the identification accuracy on the CIFAR10 dataset (Krizhevsky et al., 2009) with architectures VGG16 (Simonyan & Zisserman, 2014) and ResNet18 (He et al., 2016) in the 50-copy case, as shown in Figure 2. From the results, it can be observed that more adversarial examples can substantially improve the identification performance.

Table 1: Clean classification accuracy (%) of watermarked model copies, compared to the pre-trained baseline model performance.

| Num | Model-Data | Baseline | Max | Min | Mean | Median |
|-----|------------|----------|-------|-------|-------|--------|
| 50  | V16-C | 90.21 | 90.22 | 87.45 | 89.30 | 89.34 |
|     | V16-G | 96.79 | 97.36 | 92.79 | 96.16 | 96.32 |
|     | R18-C | 92.03 | 92.04 | 90.29 | 91.19 | 91.21 |
|     | R18-G | 98.40 | 98.56 | 96.37 | 97.72 | 97.77 |
| 100 | V16-C | 90.21 | 90.1  | 85.31 | 89.16 | 89.28 |
|     | V16-G | 96.79 | 97.55 | 93.92 | 96.08 | 96.13 |
|     | R18-C | 92.03 | 91.95 | 89.62 | 91.22 | 91.22 |
|     | R18-G | 98.40 | 98.56 | 96.37 | 97.71 | 97.75 |

Table 2: Identification accuracy (%) of the proposed framework in different cases with only one adversarial example.

| Num | Model-Data | PGD-$\ell_2$ | C&W | PGD-$\ell_\infty$ | APGD-CE | NES | HSJA |
|-----|------------|--------------|-----|-------------------|---------|-----|------|
| 50  | V16-C | 68.98 | 80.89 | 85.56 | 88.48 | 83.00 | 47.91 |
|     | V16-G | 71.78 | 66.02 | 84.17 | 88.80 | 77.68 | 47.92 |
|     | R18-C | 63.10 | 63.97 | 66.74 | 72.84 | 73.45 | 49.51 |
|     | R18-G | 64.16 | 57.71 | 75.33 | 87.17 | 80.69 | 50.04 |
| 100 | V16-C | 69.89 | 77.55 | 82.70 | 77.70 | 76.44 | 42.77 |
|     | V16-G | 71.90 | 66.19 | 81.75 | 88.37 | 71.59 | 39.26 |
|     | R18-C | 60.05 | 56.58 | 58.92 | 67.26 | 64.89 | 39.90 |
|     | R18-G | 62.52 | 57.23 | 75.74 | 85.22 | 77.88 | 40.58 |

In most cases, the accuracy can be improved to about 90% with two adversarial examples, and even to near 100% with three or more adversarial examples.

Figure 2: Identification accuracy with more adversarial examples. PGD-L2 denotes the PGD-$\ell_2$ attack; PGD-Linf denotes the PGD-$\ell_\infty$ attack.

### 4.3 Robustness Against Adaptive Attacks

Since a unique watermark is inserted into each model copy in the proposed framework, a natural question arises: will the method remain effective and robust if the adversary tries to conduct an adaptive attack to remove the watermark? To answer this question, in this section we show the effectiveness and robustness of the framework against adaptive watermark-removal attacks. Specifically, because our implicit watermark builds a direct mapping from several pixels to labels, backdoor defense methods could be used to erase our proposed watermark.
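For reference in the removal experiments that follow, here is a minimal NumPy sketch of how a discrete pixel fingerprint of the kind described in Section 3.2 could be constructed and blended into training images before finetuning a copy; the function names, the per-copy sampling scheme, and the use of a single trigger per copy are simplifying assumptions rather than the authors' exact implementation.

```python
import numpy as np

def make_discrete_fingerprint(img_shape=(32, 32, 3), n_pixels=9, seed=0):
    """Sample a copy-specific trigger: n_pixels random (row, col) positions and colors."""
    rng = np.random.default_rng(seed)
    h, w, c = img_shape
    rows = rng.integers(0, h, size=n_pixels)
    cols = rng.integers(0, w, size=n_pixels)
    colors = rng.random(size=(n_pixels, c))        # per-pixel colors in [0, 1]
    return rows, cols, colors

def apply_fingerprint(image, fingerprint, alpha=0.3):
    """Blend the trigger pixels into a normalized image with blending ratio alpha."""
    rows, cols, colors = fingerprint
    stamped = image.copy()
    stamped[rows, cols] = (1 - alpha) * stamped[rows, cols] + alpha * colors
    return stamped

def build_fingerprint_set(images, labels, fingerprint, target_labels, p=0.3, seed=0):
    """Stamp and relabel a fraction p of the data with the copy's fingerprint labels."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(p * len(images)), replace=False)
    for i in idx:
        images[i] = apply_fingerprint(images[i], fingerprint)
        labels[i] = rng.choice(target_labels)      # one of the copy's sampled labels (|y_i| = 2)
    return images, labels
```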
We then test the robustness of the proposed framework against different types of adaptive attacks, including finetuning-based removal methods (Liu et al., 2021) and reverse-engineering-based removal methods (Wang et al., 2019; Aiken et al., 2021).

For the finetuning-based removal methods, we re-implement the 'WILD' framework of Liu et al. (2021) according to the paper, since we did not find any open-source code for it. We follow the same settings, using 20% of the training data for finetuning the watermarked model. The Jensen–Shannon divergence is used for the distribution metric, and the loss weight for this term is 10, as used in the paper. We use the 50 VGG16 models trained on CIFAR10 to test the effectiveness of watermark removal. Initially, we found that the backdoor removal method could remove our watermark in 90% of cases. However, we empirically found that if we use data augmentation methods such as Random Erasing (Zhong et al., 2020) during watermarking, the watermarked model becomes much more robust against removal. Note that we did not insert the watermark with any distribution loss (a component that is very important for the backdoor removal in Liu et al. (2021)) in order to specifically counter that removal method; we only use commonly adopted data augmentation methods during watermarking. With data augmentation during watermarking, our implicit watermark remains largely intact, with removal succeeding in only about 10% of cases.

We also test the robustness against reverse-engineering-based backdoor removal methods (Wang et al., 2019; Aiken et al., 2021). For these Neural Cleanse-based methods, the removal performance relies heavily on first detecting the watermark; if Neural Cleanse cannot detect any watermark, no further steps proceed. Hence we mainly test whether Neural Cleanse can effectively detect our implicit watermark. We found that Neural Cleanse can no longer detect any watermarks if we simply increase the number of fingerprint classes to $|y_i| = 4$. At the same time, the number of fingerprint classes $|y_i|$ has limited effect on the identification accuracy and can even further improve it, as we show in the following. The clean accuracy with $|y_i| = 4$ is shown in Table 3, which shows that a larger $|y_i|$ does not affect clean accuracy due to the high capacity of neural networks.

Table 3: Clean accuracy with $|y_i| = 4$.

| Baseline | Max | Min | Mean | Median |
|----------|-------|-------|-------|--------|
| 90.21% | 90.09%| 86.54%| 89.10%| 89.19% |

Then, for each attack, we generate around 1,500 adversarial examples using the 50 models. The identification accuracy given only one adversarial example on different adversarial attacks is shown in Table 4.

Table 4: Identification accuracy with $|y_i| = 4$.

| PGD-$\ell_2$ | PGD-$\ell_\infty$ | APGD-CE | C&W | NES |
|--------------|-------------------|---------|-----|-----|
| 75.75% | 60.42% | 77.50% | 69.37% | 72.26% |

To summarize, we show that with only small and reasonable modifications, watermarked models are robust against different types of adaptive attacks, verifying the effectiveness and robustness of our proposed framework.

### 4.4 Ablation Study

**Effect of different choices of $T$.** To show the effects of different choices of the threshold $T$, we present the identification results under different $T$ in this section, as shown in Table 5. We use VGG16-CIFAR10 with 50 copies to test the effect of different $T$, selecting $T = 5, 10, 15$.
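As a quick numerical illustration of the role of $T$, the short sketch below evaluates the confidence weight $w = \text{sigmoid}(\max_i l_i - T)$ from Section 3.3 for the thresholds considered here; the two margin values are illustrative assumptions standing in for a high-confidence white-box example and a low-confidence boundary-attack example.

```python
import numpy as np

def confidence_weight(max_margin, T):
    """w = sigmoid(max_i l_i - T): the weight placed on kl_i* (vs. r_i*) in Equation 4."""
    return 1.0 / (1.0 + np.exp(-(max_margin - T)))

for margin in (12.0, 3.0):                 # illustrative confidence margins
    for T in (5, 10, 15):
        print(f"margin={margin:4.1f}  T={T:2d}  w={confidence_weight(margin, T):.3f}")
# Larger T lowers w, shifting weight from the KL-based metric toward the
# radius-based metric, consistent with the trends reported in Table 5.
```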
It can be observed that with larger $T$, the identification accuracy for the PGD-$\ell_2$ (Madry et al., 2017), PGD-$\ell_\infty$ (Madry et al., 2017), C&W (Carlini & Wagner, 2017), and APGD-CE (Croce & Hein, 2020) attacks decreases, while the accuracy for the HSJA (Chen et al., 2020) and NES (Ilyas et al., 2018) attacks increases. According to the analysis in Section 3.3, this indicates that the adversarial examples generated by the PGD-$\ell_2$, PGD-$\ell_\infty$, C&W, and APGD-CE attacks have higher confidence than those generated by the HSJA and NES attacks.

Another observation is that, compared to the other attacks, APGD-CE and HSJA are more stable with respect to the choice of threshold $T$. The difference between $T = 5$ and $T = 15$ is about 4% for APGD-CE and HSJA, while the difference for the other attacks is up to 10%. The reason may be that APGD-CE uses adaptive stepsize adjustment instead of a fixed stepsize to generate perturbations, which may make it more stable, while for HSJA the computed confidence may be very small since it searches for adversarial examples near the decision boundary (Chen et al., 2020); hence different values of $T$ do not have much effect on the combined final metric value. In practice, to obtain better identification results, the investigator can first compute the confidence level as described in Section 3.2. Based on whether this confidence value is large or small, the investigator can then choose the threshold $T$.

Table 5: Identification accuracy (%) with different choices of the threshold $T$.

| $T$ | PGD-$\ell_2$ | C&W | PGD-$\ell_\infty$ | APGD-CE | NES | HSJA |
|-------|--------------|-----|-------------------|---------|-----|------|
| $T = 5$ | 71.32 | 83.30 | 87.14 | 88.77 | 75.93 | 45.82 |
| $T = 10$ | 65.39 | 76.60 | 81.65 | 87.29 | 84.39 | 48.63 |
| $T = 15$ | 60.80 | 67.43 | 73.46 | 84.83 | 84.63 | 48.71 |

**Effect of watermark design.** As mentioned in Section 3.2, the watermarks are inserted in a discrete manner. In this section, we show that discrete watermarks can indeed substantially improve the identification accuracy. Specifically, we finetune 50 VGG16 model copies on CIFAR10 with square watermarks. The finetuning process and the generation of adversarial examples are the same as for the discrete watermark, except that the watermarks are inserted as a $3 \times 3 \times 3$ square in a contiguous region. We set the threshold $T = 7$ for a fair comparison. First, we compare the clean classification accuracy under the different watermark insertions. The results in Table 6a indicate that the effect of the watermark insertion manner on clean classification accuracy is subtle. Table 6b shows the identification accuracy with only one adversarial example. From the results, we can see that the discrete watermark performs much better than the square one, especially for the PGD-$\ell_2$, PGD-$\ell_\infty$, APGD-CE, and C&W attacks. We defer further ablation studies to the Appendix.

Table 6: Clean classification accuracy (%) and identification accuracy (%) for different types of watermark $w_i$ selection. 'Discrete' means the watermark pixels are selected at discrete positions; 'Square' means the watermark pixels are selected as a square in a contiguous region.

(a) Clean classification accuracy (%).
| Watermark type | Baseline | Max | Min | Mean | Median |
|----------------|----------|-----|-----|------|--------|
| Discrete | 90.21 | 90.22 | 87.45 | 89.30 | 89.34 |
| Square | 90.21 | 90.45 | 88.06 | 89.44 | 89.55 |

(b) Identification accuracy (%) with only one adversarial example.

| Watermark type | PGD-$\ell_2$ | C&W | PGD-$\ell_\infty$ | APGD-CE | NES | HSJA |
|----------------|--------------|-----|-------------------|---------|-----|------|
| Discrete | **68.98** | **80.89** | **85.56** | **88.48** | **83.00** | **47.91** |
| Square | 26.65 | 59.78 | 36.49 | 40.95 | 80.86 | 44.47 |

5 CONCLUSION AND LIMITATIONS

In this paper, we propose a novel framework for identifying the adversary from only one adversarial example under white-box attacks. We design an implicit watermarking method based on constructed fingerprint datasets that make each model copy unique, and we propose two different metrics to identify the adversary with high accuracy in the data-free case. Extensive experiments on various attacks (both white-box and black-box), datasets, and model architectures verify the effectiveness of the proposed method. With a few more adversarial examples available, the tracing accuracy can be further improved to near 100%. However, although the proposed framework shows promisingly high adversary identification accuracy, it cannot handle cases in which several adversaries jointly conduct the adversarial attack. Also, the proposed framework cannot be directly applied to machine learning tasks other than image classification; we leave this for future work.

REFERENCES

William Aiken, Hyoungshick Kim, Simon Woo, and Jungwoo Ryoo. Neural network laundering: Removing black-box backdoor watermarks from deep neural networks. *Computers & Security*, 106:102277, 2021.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *2017 IEEE Symposium on Security and Privacy (SP)*, pp. 39–57. IEEE, 2017.

Jianbo Chen, Michael I Jordan, and Martin J Wainwright. Hopskipjumpattack: A query-efficient decision-based attack. In *2020 IEEE Symposium on Security and Privacy (SP)*, pp. 1277–1294. IEEE, 2020.

Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, and Cho-Jui Hsieh. Sign-opt: A query-efficient hard-label adversarial attack. *arXiv preprint arXiv:1909.10773*, 2019.

Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, and Cho-Jui Hsieh. Cat: Customized adversarial training for improved robustness. *arXiv preprint arXiv:2002.06789*, 2020.

Minhao Cheng, Rui Min, Haochen Sun, and Pin-Yu Chen. Identification of the adversary from a single adversarial example. 2023.

Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International Conference on Machine Learning*, pp. 2206–2216. PMLR, 2020.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In *International Conference on Machine Learning*, pp. 2137–2146. PMLR, 2018.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
*arXiv preprint arXiv:1412.6980*, 2014. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Xuankai Liu, Fengting Li, Bihan Wen, and Qi Li. Removing backdoor-based watermarks in neural networks with limited data. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pp. 10149–10156. IEEE, 2021. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*, 2014. Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. *Neural networks*, 32:323–332, 2012. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. *Advances in Neural Information Processing Systems*, 32, 2019.
UH4HinPK9d
Even if we settle for finite-dimensional spaces for the initial condition, the parameters, the inputs, etc. (i.e., all the places where one can change the dynamics), the overall dimension can be large. Indeed, $C_{f,I}$ is the range of the forward problem, containing all possible trajectories. It is not possible to characterize the space analytically, so it will have to be done through the finite number of parameters mentioned above. In that sense, (13) becomes the standard parameterized minimization problem, like those we see in regressions. If we know a priori that the dynamics are only subject to a finite number of parameters, no one will use (12), which does not utilize this prior/expert knowledge.
Provably Accurate ODE Forecasting Through Explicit Trajectory Optimization Anonymous authors Paper under double-blind review Abstract This work introduces a method to enable accurate forecasting of time series governed by ordinary differential equations (ODE) through the usage of cost functions explicitly dependent on the future trajectory rather than the past measurement times. We prove that the space of solutions of an $N$-dimensional, smooth, Lipschitz ODE on any given finite time horizon is an $N$-dimensional Riemannian manifold embedded in the space of square integrable continuous functions. This finite dimensional manifold structure enables the application of common statistical objectives such as maximum likelihood (ML), maximum a posteriori (MAP), and minimum mean squared error (MMSE) estimation directly in the space of feasible ODE solutions. The restriction to feasible trajectories of the system limits known issues such as oversmoothing seen in unconstrained MMSE forecasting. We demonstrate that direct optimization of trajectories reduces error in forecasting when compared to estimating initial conditions or minimizing empirical error. Beyond theoretical justifications, we provide Monte Carlo simulations evaluating the performance of the optimal solutions of six different objective functions: ML, MAP state estimation, MMSE state estimation, MAP trajectory estimation, MMSE trajectory estimation over all square integrable functions, and MMSE trajectory estimation over solutions of the differential equation. 1 Introduction Decision making often hinges on accurate forecasts. Due to the simplicity of finite-dimensional spaces, many problems in state estimation and forecasting are formulated in a pointwise manner over time. Despite this computational simplification, feasible trajectories must solve consistency constraints, and additionally pointwise estimation may be unnecessary because the states at different points in time are often highly correlated. Furthermore, the structure of the trajectory itself often contains meaningful information beyond that of the value at a particular time horizon. Time series forecast consistency is often enforced after the fact, where predictions for distinct time horizons are constructed and then projected to enforce the hierarchical constraint (Rangapuram et al., 2021, 2023). In this work, we forecast ordinary differential equations (ODEs) over an entire chosen time horizon by formulating the time series forecasting problem as a finite-dimensional point estimation problem on a Riemannian manifold. We prove that the space of feasible trajectories on any finite time-interval of a smooth Lipschitz dynamical system is itself an $N$-dimensional Riemannian submanifold of the space of continuous bounded functions, where $N$ is the dimensionality of the state. We argue that neural ODEs (Chen et al., 2018) and related differential equation modeling methods (Dupont et al., 2019; Greydanus et al., 2019; Massaroli et al., 2020; Finlay et al., 2020; Bilos et al., 2021; Holt et al., 2022) fundamentally solve a problem of point estimation in the manifold largely without consideration of the statistical distinction between parameter estimation and trajectory estimation. We then use this observation to cast the trajectory estimation problem into a classical statistical framework, enabling direct optimization of statistical objectives based on forecasting. Furthermore, we provide tractable computational methods to optimize these objectives. 
The rest of the manuscript is organized as follows. In Section 2, we introduce the model formulation, assumptions, and key spaces in this work. In Section 3, we prove the existence of the finite-dimensional Riemannian trajectory manifold. Then, in Section 4, we describe the tools required for statistical estimation on the trajectory manifold based on noisy measurements, as well as a description of the implications for commonly used statistical objectives. In Section 5 we provide computational tools for optimizing these common statistical objectives directly on the manifold of valid trajectories. Finally, in Section 6 we provide numerical simulations that demonstrate that the proposed explicit trajectory optimization outperforms the standard data fitting objective. 1.1 Related Work Differential Equations and Deep Learning The connection between deep learning and differential equations can be broadly partitioned into two major categories. In physics-informed machine learning, neural networks are used to approximately solve a known differential equation subject to noisy observations. On the other side, there has been significant interest in using neural networks to learn a representation of the underlying differential equation. While using neural networks as solutions to differential equations dates back to at least the 1990’s (Lagaris et al., 1998), it has had a significant resurgence in recent years (Raissi & Karniadakis, 2018; Raissi et al., 2019). Commonly, these techniques use some form of empirical risk minimization, an objective which is closely related to maximum likelihood estimation in classical estimation and represents a notion of best fit to the observed data. In these techniques, rather than use the differential equation as a hard constraint, it is used as a regularization for regression. While this regularization approach has a number of desirable characteristics — simple optimization and an automatic tolerance for input perturbations to the system — it can result in less readily interpretable behavior. Despite these limitations, the regularization suggests the ability to use the high level of structure in the space of differential equation solutions for inference. From a different direction, numerous techniques around continuous-time machine learning such as Neural Ordinary Differential Equations (ODEs) have shown promise in time-series forecasting when the dynamics are unknown (Chen et al., 2018). In such techniques, a neural network is used to learn an approximation of the underlying differential equation. It was quickly noted that ODE solutions have fundamental topological constraints, and so augmented Neural ODEs were proposed (Dupont et al., 2019). Further analysis of the underlying behavior of ODE-defined models led to the construction of different time-varying versions of the model, as well as the usage of data-dependent vector fields (Massaroli et al., 2020). While there have been numerous application-focused papers using neural ODEs (Chen et al., 2022), many of the additional advancements have been in methods of training the models (Finlay et al., 2020). Alternative perspectives have been explored in searching for solutions in the Laplace domain (Holt et al., 2022). A subtle change in approach was used in Neural Flows, which in principle operate in the space of trajectories instead of the dynamics, but does so through a restriction of a set of functions satisfying some necessary conditions of flows (Bilos et al., 2021). 
Time Series Forecasting Regularization As regularization is an essential part of ensuring models are generalizable, we include a brief summary of key time series forecasting regularization methods. Deep learning models have many universal approaches to regularization which are independent of the context. Some common approaches include dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), batch normalization (Ioffe & Szegedy, 2015), complexity regularization (Barron, 1991), L0 regularization (Louizos et al., 2018), and classical regularizers such as Tikhonov regularization and LASSO. Beyond general techniques, time series forecasting necessitates specialized techniques due in part to the lack of independent samples. There exist numerous specialized regularization methods for recurrent neural networks (Zaremba et al., 2014; Krueger & Memisevic, 2016; Wang & Niepert, 2019; Krueger et al., 2017). Neural ODEs introduce additional complexities in the training process, and have thus spawned a number of specific regularization techniques. These include random integration times (Ghosh et al., 2020), penalties based on the Jacobian of the vector field and optimal transport (Finlay et al., 2020), or even regularization based on the ODE solvers themselves (Pal et al., 2021). Interestingly, none of these techniques so far make explicit use of the distinct structure and constraints of forecasting problems. In autoregressive models, covariance matrix based regularization has been proposed (Bickel & Gel, 2011). There have been some approaches to the problem using matrix factorization for time series forecasting (Yu et al., 2016; Chen & Sun, 2022). In more general frameworks, temporal attention based methods to guide learning for different time-horizons can be applied (Pan et al., 2019). Dependence on different prediction horizons also serves to implicitly regularize over the observed intervals (Challu et al., 2022). Hierarchical time series models use predictions at different resolutions and enforce consistency through projections (Rangapuram et al., 2021, 2023). The shadowing lemma (Pilyurin, 1959) has been used to justify estimates of long-term invariants of systems using numerical solvers (Wang et al., 2014; Lasagna et al., 2019). Finally, in systems governed by linear ODEs, it has been observed that the solution space forms a finite-dimensional linear subspace, an observation which can be used to efficiently estimate best fit trajectories and introduce regularization terms based on the Green’s functions of the system (González et al., 2014; Mutny & Krause, 2022). 1.2 Contributions • We propose a principled method for forecasting different time-horizons which extend beyond the duration of the observed data and do not require separate multi-horizon optimization. • We prove that the space of trajectories of an ODE $\dot{x} = f(x)$ on any compact interval $I \subset \mathbb{R}$ is a finite-dimensional Riemannian manifold embedded in the space of square integrable functions if $f$ is Lipschitz and continuously differentiable. • We characterize the transformation from initial conditions and parameters to the trajectory manifold given any Lipschitz and continuously differentiable ODE, thereby enabling optimization on the manifold of feasible ODE trajectories. 
• We analyze implications for standard estimation approaches such as maximum likelihood (ML), maximum a posteriori (MAP), and minimum mean squared error (MMSE) estimation, thus enabling the inheritance of their respective statistical guarantees to forecasting. 2 Problem Formulation We assume that the underlying data comes from the state space model $$\dot{x}_t = f(x_t, u_t, \theta, t) \quad (1)$$ $$y_i \sim P_{obs}(x_{\tau_i}, \theta) \quad (2)$$ $$u \sim P_{input} \quad (3)$$ where $x_t \in X \subset \mathbb{R}^N$ is the state of the system at time $t$, $\theta \in \Theta \subset \mathbb{R}^M$ is an unknown set of parameters for the model, $t \in I$ represents time on some finite-interval $I$, $u \in U \subset C^1(I, || \cdot ||_\infty)$ is a continuously differentiable external input, $y_i$ represents the observation at time $\tau_i$, $P_{obs}$ is the observation distribution parameterized by the current state and system parameters, and $P_{input}$ is the distribution of system inputs. We let $x$ and $u$ represent the entire trajectory and input respectively. In general, the forecasting interval $I$ extends significantly beyond the final measurement time $\tau_i$. The goal in this work is to enable the use of powerful statistical estimation methods such as maximum likelihood (ML), maximum a posteriori (MAP), and minimum mean squared error (MMSE) estimators to jointly estimate $x$ over the entire forecasting interval $I$. While these estimators each come with numerous theoretical guarantees, they require the space on which they operate to be well-behaved. We require two assumptions to provide our guarantees of such a structure in this manuscript. First, the assumption of Lipschitz continuity of the vector field $f$ allows the invocation of the existence and uniqueness theorem, while smoothness implies a smooth dependence on initial conditions and parameters (Khalil, 2002). **Assumption 1** (Existence, Uniqueness, and Smoothness of Trajectories). The vector field $f$ is Lipschitz continuous with a continuously differentiable derivative in the forecasting horizon. Second, we restrict the space of inputs, initial conditions, and parameterization to be finite dimensional. This enables the usage of tools from differential geometry to transport quantities between manifolds. **Assumption 2** (Finite-Dimensional Spaces). The space of possible inputs, $U$, the state space, $X$, and the parameter space, $\Theta$, are a finite-dimensional smooth manifolds with or without boundary. Under these assumptions, the main contribution of this work is to prove that there exists a smooth isomorphism $\psi : X \times U \times \Theta \rightarrow C_{f,I}$, where $C_{f,I} := \{ x : \dot{x}_t = f(x_t, u_t, \theta, t) \}$ is the space of feasible solutions of (1). 3 Trajectory Manifold The main contribution of this work is the characterization of the finite-dimensional manifold of trajectories of the system in Equation (1), or \( C_{f,I} \). In this section, we introduce a theorem which enables the application of common point estimation techniques to the forecasting problem. In particular, we show that \( C_{f,I} \) is a Riemannian manifold and that \( \psi \) represents a smooth transformation onto \( C_{f,I} \). As \( \psi \) and its directional derivatives can be readily computed numerically using ODE solvers, this characterization is sufficient for statistical estimation on \( C_{f,I} \). While the full proof is available in Appendix B, we include an outline of the proof here. 
**Theorem 1** (Isomorphism Between State Space and Trajectory Space). Under Assumption 1 and Assumption 2, the space of trajectories \( C_{f,I} \) is a finite-dimensional Riemannian manifold. Furthermore, the transformation \( \psi \) defined such that
\[
\psi(x_0, u, \theta)(t) = x_0 + \int_0^t f(x_\tau, u_\tau, \theta, \tau) \, d\tau
\]
for all \( t \in I \) is a smooth isomorphism between \( X \times U \times \Theta \) and \( C_{f,I} \).

**Proof.** A complete proof is available in Appendix B. The proof is completed in three parts, each providing an additional level of structure to \( C_{f,I} \). In each step, we use properties of the flow of the system, i.e., the semigroup of functions \( \varphi^\tau : x_t \mapsto x_{t+\tau} \) which advance time. First, we show that \( \psi \) is an injective function into the space of continuous bounded functions on the interval \( I \), or \( C(I, \| \cdot \|_\infty) \), and that \( \psi \) and \( \psi^{-1} \) are continuous, which establishes the topological manifold structure. Second, we prove that \( C_{f,I} \) is a smooth manifold by showing that \( \psi \) and \( \psi^{-1} \) are continuously differentiable and full rank. Finally, by recalling that \( L^p \) spaces on compact subsets of the real line are nested, we inherit the Riemannian metric from \( L^2 \).

The main consequence of Theorem 1 is that it reduces problems in forecasting to one of propagating a probability distribution through a smooth function. For this reason, the distinctions between the initial conditions, parameters, and inputs are inconsequential, and so for notational clarity we consider \( \psi \) to be only a function of the initial condition \( x_0 \) for the remainder of this work.

An important note is that the Riemannian metric in \( C_{f,I} \) need not be induced by the standard \( L^2 \) inner product. A natural extension is to select some symmetric, positive definite integral kernel \( K : I \times I \to \mathbb{R}^{N \times N} \) and define the inner product
\[
\langle x, x' \rangle_K = \int_{I \times I} x_\tau^\top K(\tau, \tau') x'_{\tau'} \, d\tau \, d\tau',
\]
such that \( \langle x, x \rangle_K > 0 \) for any \( x \neq 0 \). While the choice of an appropriate integral kernel \( K \) may be an interesting independent question, an immediate application is in weighting the importance of different time horizons. That is, let
\[
K(\tau, \tau') = \begin{cases} g(\tau) \, 1 & \tau = \tau' \\ 0 & \text{otherwise} \end{cases},
\]
for some strictly positive \( g > 0 \), where \( 1 \in \mathbb{R}^{N \times N} \) is the identity matrix. This enables us to directly optimize for different forecasting objectives in a statistically rigorous manner with no major changes to the underlying prediction algorithm.
That is, given some continuous probability density \( p_0(x_0) \) over the initial conditions of the system, \( x = \psi(x_0) \) is distributed according to \[ p(x) = |\det D\psi|^{-1} p_0(\psi^{-1}(x)), \] where \( D\psi \) represents the matrix of partial derivatives in local coordinates of \( C_{f,I} \) and \( X \). See Appendix C for additional details. Local coordinates introduce an inherent difficulty in numerical computations, and so we include the following proposition as an alternative approach to computing \( |\det D\psi| \). **Proposition 1.** Let \( \{v_i\} \) form an orthonormal basis of the tangent space of \( X \) at \( x_0 \), and let \( D_{v_i}|_{x_0}\psi \) be the directional derivative of \( \psi \) in the direction \( v_i \) at \( x_0 \), the result of which is represented in the ambient space \( L^2(I) \). Finally, define \[ a_{i,j} := \langle D_{v_i}|_{x_0}\psi, D_{v_j}|_{x_0}\psi \rangle_K \quad \text{and} \quad A_x := \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,N} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N,1} & a_{N,2} & \cdots & a_{N,N} \end{bmatrix}, \] based on the inner products of the directional derivatives with respect to the basis. Then \[ |\det D\psi| = \sqrt{|\det A_x|}. \] **Proof.** Proof available in Appendix D. The complexity in this statement is in the correct interpretation of \( D\psi \). At first glance, \( D\psi \) does not appear to be square, and so the determinant would not be well defined. This potential concern is unfounded though, as \( D\psi \) is defined in terms of the tangent spaces of the manifolds, and thus can be represented as a square matrix. A detailed proof is included in Appendix D. We now connect the geometric properties to common estimation objectives. In particular, we provide methods for MAP estimation, ML estimation, and MMSE estimation constrained to \( C_{f,I} \) or on the ambient space. ### 4.1 ML Estimation We consider maximum likelihood estimation of the trajectory. Suppose we have some likelihood function of our observations parameterized by the state of the system, e.g. \( p(y_i|x_{\tau_i}) \). Then the likelihood of the entire set of observations can be expressed in terms of the initial condition as \( p(y|x) = \prod_i p(y_i|\phi^{\tau_i}(x_0)) \), and the maximum likelihood estimate of the initial condition is \( \arg\max_{x_0} p(y|x_0) \). ML estimation is known to commute with bijective transformations, and thus \[ \hat{x}_{ML} = \arg\max_{\hat{x} \in C_{f,I}} p(y|\psi^{-1}(\hat{x})) = \psi\left( \arg\max_{x_0 \in X} p(y|\hat{x}_0) \right). \] **Invariant to \( \psi \):** It is a well-known that ML estimation commutes with bijective reparameterizations (Trees [2001]). Thus, the ML trajectory in \( C_{f,I} \) is the result of applying \( \psi \) to the ML estimate state. **Invariant to \( K \):** The application of the integral kernel \( K \) can be viewed as a linear reparameterization. Thus, the ML estimate is invariant to \( K \). ### 4.2 MAP Estimation The behavior of MAP estimation is more subtle than that of ML estimation. While it only involves the addition of a prior to the initial condition, the shift in interpretation from frequentist statistics to Bayesian statistics requires valid probability distributions. Thus, MAP estimation loses invariance to reparameterization and requires the application of Proposition 1. The posterior distribution of the initial condition is \( p(x_0|y) \propto p(y|x_0)p(x_0) \). 
To complete MAP estimation on \( C_{f,I} \), we complete a pointwise multiplication of the posterior distribution of the current state with \( |\det A_x|^{-1} \), or \[ \hat{x}_{MAP} = \arg\max_{\hat{x} \in C_{f,I}} p(\hat{x}|y) = \arg\max_{\hat{x} \in C_{f,I}} p(\psi^{-1}(\hat{x})|y)p(\psi^{-1}(\hat{x}))|\det A_{\hat{x}}|^{-1}, \] where \( x_0 = \psi^{-1}(x) \) is the initial condition of the trajectory. Dependent on \( \psi \): MAP estimation is only invariant to linear reparameterizations (Trees, 2001). This can be seen directly through the dependence on \( A_x \) in Equation (11). Dependent on \( K \): While a linear transformation to a space ordinarily results in no change to MAP estimation, due to the geodesic curvature of the manifold, the linear transformation becomes nonlinear. This can be seen in Equation (11), which has a non-linear dependence on \( x \) through \( K \) as part of \( A_x \). 4.3 MMSE Estimation on the Ambient Space MMSE estimation is well-known to be the conditional expectation, or \[ \hat{x}_{\text{MMSE}} = \arg \min_{x \in L^2(I)} \mathbb{E} \left[ \| \hat{x} - x \|^2 \mid y \right] = \mathbb{E} \left[ \psi(x_0) \mid y \right], \] (12) where \( L^2(I) \) is the space of square integrable functions on \( I \). Dependent on \( \psi \): Conditional expectation does not in general commute with \( \psi \), and so MMSE estimation of the state is a different problem than MMSE estimation of the trajectory. Invariant to \( K \): As conditional expectation commutes with linear transformations, the MMSE estimate is invariant to the choice of \( K \). An implication is that the MMSE trajectory estimate is optimal for any desired weighting of time horizons by the construction of Equation (6). Additional details are provided in Appendix E. 4.4 MMSE Estimation on the Manifold We choose to consider the ambient distance rather than the intrinsic distance on the manifold as the former is both more physically meaningful and computationally tractable. By the orthogonality principle, the MSE of any other estimate \( \tilde{x} \) is the sum of the MSE of this estimate with the squared distance, or \( \mathbb{E} \left[ \| \tilde{x} - x \|^2 \mid y \right] = \mathbb{E} \left[ \| \tilde{x} - \hat{x}_{\text{MMSE}} \|^2 \mid y \right] + \mathbb{E} \left[ \| \hat{x}_{\text{MMSE}} - x \|^2 \mid y \right] \). Thus, the MMSE estimate on the manifold is the projection of the ambient MMSE estimate onto the manifold, or \[ \hat{x}_{\text{MMSE},C_f,I} = \arg \min_{x \in C_f,I} \mathbb{E} \left[ \| \hat{x} - x \|^2 \mid y \right] = \arg \min_{x \in C_f,I} \| \hat{x} - \hat{x}_{\text{MMSE}} \|^2. \] (13) Dependent on \( \psi \): Identical to the ambient case, the MMSE estimate is dependent on \( \psi \). Dependent on \( K \): \( K \) acts in a nonlinear manner on the space through the projection in Equation (13). 5 Computation of Estimates In this section, we describe methods for computing estimates on the trajectory manifold. The core idea is to pull the costs on the manifold into the state space along \( \psi \). ML Estimation ML estimation can be computed in exactly the approach proposed by neural ODEs (Chen et al., 2018). The derivative can be computed using adjoint sensitivity analysis, then standard first-order methods can be applied to best fit the trajectory to the observations. 
By letting \( D\varphi^{\tau_i} \) denote the Jacobian of the flow, the gradient of the log-likelihood is computed in \( X \) as
\[
\nabla_{x_0} \log p(y|x_0) = \sum_i \nabla_{x_0} \log p(y_i|\varphi^{\tau_i}(x_0)) = \sum_i [D\varphi^{\tau_i}]^\top \nabla_{x_{\tau_i}} \log p(y_i|x_{\tau_i}).
\] (14)

MAP Estimation MAP estimation requires the computation of the reparameterization weighting term, which depends on the first derivative of the ODE with respect to the initial conditions. Thus, while the derivative exists in principle, it is significantly more expensive to compute numerically through ODE solvers due to the dependence on the Hessian of the ODE solution. A full description of the computation of the pushforward weight is available in Appendix F, as well as a discussion of numerical tolerance selection. For this reason, we propose the use of zeroth-order methods to approximate the derivative, or other derivative-free optimization methods such as simulated annealing. Note that this limitation makes MAP estimation significantly less practical than the other techniques as the dimensionality scales.

MMSE Estimation — Ambient Space MMSE estimation in the ambient space can be computed through a sampling approach. We can construct an approximation of the MMSE estimate as
\[
\hat{x}_{\text{MMSE}} = \mathbb{E} [\psi(x_0) \mid y] \approx \frac{1}{S} \sum_{i=1}^{S} \psi(X_{0,i}),
\]
where \( \{X_{0,i}\} \) are a set of i.i.d. samples from the posterior distribution of the initial condition, \( p(x_0 \mid y) \propto p(y \mid x_0)p(x_0) \). Often, sampling directly from the posterior is not practical. In such a case, observe that we can readily evaluate the posterior up to a multiplicative scalar. This is sufficient for importance sampling and numerous Markov chain Monte Carlo (MCMC) methods.

MMSE Estimation — Trajectory Manifold MMSE estimation on the manifold can be completed in two steps through the orthogonality principle: first, construct the ambient MMSE estimate; second, project it onto the manifold. The projection can be computed with gradient-based methods through a geometric pullback of the gradient into the state space. That is,
\[
\nabla_{x_0} \| \psi(x_0) - \hat{x}_{\text{MMSE}} \|^2 = 2 [D\psi]^\top (\psi(x_0) - \hat{x}_{\text{MMSE}}),
\]
where \( D\psi \) can be approximated by numerical differentiation through the ODE solver.

6 Numerical Experiments

In this section, we include numerical simulations to elucidate the differences in behavior between the different estimation objectives discussed in this work. Throughout our simulations, we compared the performance of the optimal solutions on six different forecasting objectives: ML estimation, MAP estimation of the initial condition, MMSE estimation of the initial condition, MAP estimation of the trajectory, MMSE estimation of the trajectory in the ambient space, and MMSE estimation of the trajectory restricted to \( C_{f,I} \). These objectives are equally split between the classical two-step approach of estimating the system state before solving the ODE and direct optimization over the forecasting interval, in order to best illustrate the differences in behavior of the solutions. The key operations in this work are available as a Python library which takes vector fields describing system dynamics as arguments, while the simulation code is included to fully reproduce all figures shown in this section.\(^1\) We implemented our techniques using Diffrax (Kidger, 2021), a library for working with differential equations and machine learning in JAX.
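As a concrete (and deliberately simplified) illustration of how these quantities can be computed with such tools, the sketch below uses `jax.experimental.ode.odeint` in place of Diffrax; the dynamics, the time grid, the uniform-weight approximation of the \(L^2\) inner product, and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

# Illustrative Lotka-Volterra dynamics with fixed parameters.
def f(x, t, alpha=1.0, beta=0.5, delta=0.5, gamma=1.0):
    prey, pred = x
    return jnp.array([alpha * prey - beta * prey * pred,
                      delta * prey * pred - gamma * pred])

ts = jnp.linspace(0.0, 10.0, 200)        # discretization of the forecasting interval I
dt = ts[1] - ts[0]

def psi(x0):
    """psi: initial condition -> trajectory sampled on the grid ts, shape (len(ts), N)."""
    return odeint(f, x0, ts)

def pushforward_weight(x0):
    """sqrt(|det A_x|) from Proposition 1, with the L^2 inner product
    approximated by a uniformly weighted sum over the grid."""
    J = jax.jacfwd(psi)(x0)              # shape (len(ts), N, N); J[..., k] is the derivative along e_k
    A = jnp.einsum('tik,til->kl', J, J) * dt
    return jnp.sqrt(jnp.abs(jnp.linalg.det(A)))

def ambient_mmse(posterior_samples):
    """Monte Carlo estimate of E[psi(x_0) | y] from posterior samples of x_0."""
    return jnp.mean(jax.vmap(psi)(posterior_samples), axis=0)

def project_to_manifold(x0_init, x_mmse, steps=500, lr=1e-2):
    """Gradient descent on ||psi(x_0) - x_MMSE||^2 to obtain the manifold-constrained estimate."""
    loss = lambda x0: jnp.sum((psi(x0) - x_mmse) ** 2) * dt
    x0 = x0_init
    for _ in range(steps):
        x0 = x0 - lr * jax.grad(loss)(x0)
    return psi(x0)
```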
Further information on software dependencies, simulation hardware, and simulation details are available in Appendix G. We simulated the Lotka–Volterra equation, \[ x_\tau = \begin{bmatrix} x_\tau^{(1)} \\ x_\tau^{(2)} \end{bmatrix}, \quad \dot{x}_\tau = \begin{bmatrix} \alpha x_\tau^{(1)} - \beta x_\tau^{(1)} x_\tau^{(2)} \\ \delta x_\tau^{(1)} x_\tau^{(2)} - \gamma x_\tau^{(2)} \end{bmatrix}, \] which represents a model of population dynamics between a predator and prey species known to involve oscillations dependent on the initial conditions. This system was chosen in part due to the rapid transitions in the time series, the positions of which are essential in predictions to limit error. We recorded measurements every 0.3 seconds on the time interval \([0, 3]\) of the form \[ y_i = x_{\tau_i} + \eta_i, \] where \( \tau_i \) is the time of the measurement and \( \eta_i \sim \mathcal{N}(0, 1\sigma^2_\eta) \) is additive i.i.d. Gaussian noise. We first include a set of experiments to illustrate the behaviors in the different objectives which may lead to poor performance in forecasting tasks. The results of these simulations are shown in Figure 1 and Figure 2. The first includes example trajectories to illustrate issues of oversmoothing and phase mismatch, while the second illustrates the objective function over the state space. Simulations using three additional systems are available in Appendix A.1 demonstrating similar behavior to that in Figure 1. Furthermore, simulations demonstrating the necessity of model knowledge to operate in this data-limited regime are included in Appendix A.2. In Figure 1, the blue and orange lines represent the two different state variables. The dashed lines represent the ground truth, while the solid lines represents the chosen estimate. Qualitatively, we \(^1\)Simulation code available at https://github.com/{author}/{repository} Figure 1: Forecasting Performance based on 6 different objective functions for the Lotka-Volterra equations. The dashed lines indicate the true trajectory of the system, while the solid lines indicate the estimated trajectory. Data collection stopped at the vertical red line. Observe that, despite being the best fit for the observations by construction, the ML and MAP state estimation select trajectories which rapidly lose synchronization with the periodic trajectory. While the MMSE state estimation does better, we see that the second peak is shifted even in this short time horizon. Meanwhile, all three trajectory estimation techniques appear to better match the phase of periodic structure due to the direct dependence in the cost. Finally, while the ambient MMSE trajectory suffers from the commonly seen over-smoothing of forecasts, the manifold constraint maintains the qualitative shape defined by the system at the cost of an increase in MSE. Figure 2: This figure contains examples of the forecasting objectives and pushforward weight in this work for one realization of the Lotka-Volterra system. The red star indicates the true initial condition to be estimated. Each panel contains one objective function, e.g., the likelihood function, the posterior density, or the mean squared error. Notably, the Trajectory MSE plot is the only method to capture the valley of trajectories similar to the true solution. In Figure 2, we illustrate the objective functions defined by the observations in these simulations, as well as the pushforward weight required to transform between the state space and the trajectory manifold. 
Observe that the pushforward weight shifts peaks towards regions which are less sensitive to the initial condition. Similarly, the trajectory MSE illustrates a valley of initializations which lead to similar trajectories along the interval, a structure not captured by any competing technique.

Figure 3: Error as a function of noise power and time horizon for forecasting the Lotka-Volterra system. Left: Forecasting performance as a function of noise power in the observations; Right: Forecasting performance as a function of time horizon.

6.1 Quantitative Comparison

In this section, we conducted Monte Carlo simulations to compare the MSE, Mean Absolute Error (MAE), and expected sup norm of the trajectory error, \( \mathbb{E}[\sup_t \| \hat{x}_t - x_t \|] \), for the proposed objectives as a function of noise power and time horizon. In these simulations, a uniform prior over an interval was chosen, resulting in an identical objective for ML estimation and MAP estimation of the initial condition. Results are shown in Figure 3, where the left panel varies \( \sigma^2_\eta \) with a constant 10-second time horizon, while the right panel varies the time horizon with a fixed \( \sigma^2_\eta = 1 \).

The key feature in the results is that the maximum likelihood curve, which represents fitting the observed data, always performs the worst, and this issue becomes even more prominent for longer time horizons and when the noise power is high. This demonstrates the need to critically consider the implications of the reparameterization on the time series forecasting problem, particularly when working with time horizons significantly longer than the observation interval. While the unconstrained MMSE trajectory performs significantly better in MSE than all competing methods, recall that it produces trajectories which do not resemble the original system. While preserving the system structure, MMSE estimation constrained to \( C_{f,I} \) still significantly outperforms the other competing methods, particularly for long time horizons. Furthermore, the performance of the proposed constrained MMSE estimation is often comparable to the performance of the unconstrained solution in MAE and expected sup norm.

7 Conclusion

In this work, we introduced a method for provably accurate forecasting of time series governed by ODEs through the use of objectives explicitly dependent on the future trajectory of the system. By proving that the space of finite-horizon trajectories of a continuously differentiable, Lipschitz dynamical system forms a Riemannian manifold, the problem can be described as one of point estimation in a finite-dimensional space. This realization enabled the application of ML, MAP, and MMSE estimation directly in the space of feasible ODE trajectories, where the objectives can be optimized computationally by transporting them into the original state space. Each of these estimators then inherits its respective performance guarantees from its point estimation counterpart: something lacking from the traditional two-step approach of estimating the initial condition before solving the system. The developments in this work will help to provide statistical guarantees for trajectory estimation algorithms, as well as enable the development of new prediction algorithms which include differential equation constraints.

REFERENCES

Andrew R. Barron. Complexity regularization with application to artificial neural networks.
In George Roussas (ed.), *Nonparametric Functional Estimation and Related Topics*, pp. 561–576. Springer Netherlands, Dordrecht, 1991. ISBN 978-94-011-3222-0. doi: 10.1007/978-94-011-3222-0_42. Peter J. Bickel and Yulia R. Gel. Banded regularization of autocovariance matrices in application to parameter estimation and forecasting of time series. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 73(5):711–728, 2011. doi: 10.1111/j.1467-9868.2011.00779.x. Marin Biloš, Johanna Sommer, Syama Sundar Rangapuram, Tim Januschowski, and Stephan Günnemann. Neural flows: Efficient alternative to neural odes. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 21325–21337. Curran Associates, Inc., 2021. Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza, Max Mergenthaler, and Artur Dubrawski. N-hits: Neural hierarchical interpolation for time series forecasting. *CoRR*, abs/2201.12886, 2022. Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. Xing Chen, Flavio Abreu Araujo, Mathieu Riou, Jacob Torrejon, Dafiné Ravelosona, Wang Kang, Weisheng Zhao, Julie Grollier, and Damien Querlioz. Forecasting the outcome of spintronic experiments with neural ordinary differential equations. *Nature Communications*, 13(1):1016, Feb 2022. ISSN 2041-1723. doi: 10.1038/s41467-022-28571-7. Xinyu Chen and Lijun Sun. Bayesian temporal factorization for multidimensional time series prediction. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(9):4659–4673, 2022. doi: 10.1109/TPAMI.2021.3066551. Emilien Dupont, Arnaud Doucet, and Yee Whye Teh. Augmented neural ODEs. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. Chenyou Fan, Yuze Zhang, Yi Pan, Xiaoyue Li, Chi Zhang, Rong Yuan, Di Wu, Wensheng Wang, Jian Pei, and Heng Huang. Multi-horizon time series forecasting with temporal attention learning. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, KDD ’19, pp. 2527–2535, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450362016. doi: 10.1145/3292500.3330662. Chris Finlay, Joern-Henrik Jacobsen, Levon Nurbekyan, and Adam Oberman. How to train your neural ODE: the world of Jacobian and kinetic regularization. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 3154–3164. PMLR, 13–18 Jul 2020. Arnab Ghosh, Harkirat Behl, Emilien Dupont, Philip Torr, and Vinay Namboodiri. Steer : Simple temporal regularization for neural ode. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 14831–14843. Curran Associates, Inc., 2020. Javier González, Ivan Vujačić, and Ernst Wit. Reproducing kernel hilbert space based estimation of systems of ordinary differential equations. *Pattern Recognition Letters*, 45:26–32, 2014. ISSN 0167-8655. doi: 10.1016/j.patrec.2014.02.019. 
Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.
O2jyuo89CK
Assuming normalized coordinates in $[-1, 1] \times [-1, 1]$, how does it make sense for the denoising network to take a random stroke at (-1, -1) and try to denoise it to (1, 1), instead of just recognizing (with Hungarian matching or a simple Euclidean distance) that it is much easier to denoise the stroke closer to the final location?
MODELING COMPLEX VECTOR DRAWINGS WITH STROKE CLOUDS Alexander Ashcroft¹ Ayan Das¹ Yulia Gryaditskaya¹,² Zhiyu Qu¹ Yi-Zhe Song¹,² ¹SketchX, CVSSP, University of Surrey, UK ²Surrey Institute for People-Centred AI (PAI), UK ABSTRACT Vector representations offer scalability, editability, and storage efficiency, making them indispensable for a wide range of digital applications. Yet, generative models for vector drawings remain under-explored, in particular for modeling complex vector drawings. This is in part due to the primarily sequential and auto-regressive nature of existing approaches failing to scale beyond simple drawings. In this paper, we introduce a generative model for complex vector drawings, representing them as “stroke clouds” – sets of arbitrary cardinality comprised of n-dimensional Bézier curves. Stroke dimensionality is a design choice that allows the model to adapt to different levels of sketch complexity. We learn to encode this set of strokes into compact latent codes by a probabilistic reconstruction procedure based on De Finetti’s Theorem of Exchangeability. A generative model is then defined over the latent vectors of the encoded stroke clouds. Thus, the resulting “Latent stroke cloud generator (LSG)” captures the distribution of complex vector drawings in an implicit set space. We demonstrate the efficacy of our model in the generation of complex Anime line-art drawings. 1 INTRODUCTION Figure 1: Our model allows us to perform both probabilistic reconstruction of existing samples and generation of new samples matching a training data distribution. Reconstruction: We encode a vector drawing (left) as a set of Bézier curves and then probabilistically decode it with an MLP-based diffusion model to recreate the drawing (right). Generation: Through the use of a latent diffusion model we generate latent codes, and then decode them into vector drawings. The rise of Diffusion Probabilistic Models (DPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020a) and their spectacular performance on conditional image generation (Rombach et al., 2022; Saharia et al., 2022) has prompted the emergence of the subfield of creative visual intelligence. However, diffusion-based generation targets raster images and leaves drawings, sketches, and other forms of “chirographic data”¹ underrepresented. Vector representations are scalable, editable, and storage efficient – properties that are beneficial in digital use cases. Such vector representations, however, due to their variable size nature, are mostly (with notable exceptions, e.g. Jain et al. (2023)) incompatible with the current image generation pipelines. It is to be noted that modeling chirographic vector modalities have been attempted before (Ha & Eck, 2018; Carlier et al., 2020; Aksan et al., 2020). ¹The term coined by Das et al. (2022; 2023) Ribeiro et al., 2020; Lopes et al., 2019), but at a smaller scale and complexity. In this paper, we attempt to design a generative model directly on vector drawings, specifically some complex ones as depicted in Fig. 1. Prior attempts at modeling vector drawings were limited to sequential representations (Ha & Eck, 2018; Aksan et al., 2020; Ribeiro et al., 2020). The sequence elements used were discretized 2D points (Ha & Eck, 2018; Ribeiro et al., 2020), SVG command tokens (Carlier et al., 2020; Lopes et al., 2019) and individual strokes (Aksan et al., 2020; Das et al., 2020b; Chowdhury et al., 2022). 
Generative models based on sequential representations were demonstrated only for drawings and sketches significantly simpler in complexity than the ones we target in this work. This can be attributed to the inability of sequential models (i.e. RNNs as in Graves (2013), causal Transformers Ribeiro et al. (2020); Aksan et al. (2020)) to handle long sequences. A challenge of modeling long-range dependencies is a well-known drawback (Pascanu et al., 2013), of such models, even beyond chirographic data, in tasks such as scene synthesis (Paschalidou et al., 2021; Para et al., 2023). In this paper, we target a generative model for vector drawings, specifically focusing on complex drawings. To this end, we abandon the sequential approaches. We propose to model highly complex drawings as a set of its constituent strokes – denoting this data structure as “Stroke Cloud”. Each stroke can be theoretically represented by any available vector format, such as simple 2D polylines (Ha & Eck (2018)), Bézier curves, or differential geometric embeddings (e.g. Aksan et al. (2020)). We here chose Bézier curves for their ubiquitous use in vector art. We then define our generative model on the space of stroke clouds embeddings (sets embeddings), mitigating the long-range dependency issues of sequential representations. More formally, we define a generative model over a set $X$ of arbitrary cardinality. Note that generative models are typically defined on the Euclidean vector space $\mathbb{R}^N$ and do not naturally extend to set spaces, which have element invariance properties. However, the foundation for our set-based generative model has been laid by the recent work of Zaheer et al. (2017), and our solution is supported by De-Finetti’s Theorem. It follows that we can represent a set as a product of independent conditional distributions over the elements, given a latent embedding $z = E(X)$ of the entire set (Lee et al., 2019; Zaheer et al., 2017). We can then proceed to build a generative model $p_\theta(z)$ over the set embeddings. We show that such models efficiently scale up to complex drawings and generate plausibly-looking samples. Our main contributions can be summarized as follows: (i) We introduce the first generative model for complex vector drawings. Yet, our approach can be used for simpler drawings and any other form of “chirographic data”, as we demonstrate in the Appendix. (ii) We define our generative model over complex drawings with a novel “stroke cloud” representation, which is a set of its constituent strokes. To this end, we learn set embeddings for each set (stroke cloud) to facilitate the downstream generative model. The code and the data are available at https://github.com/Co-do/Stroke-Cloud. 2 RELATED WORKS Sketch Generation First, we focus on the most related recent works for sketch representation and generation. Generating freehand-like sketches from reference images or textual captions remains a challenging task. When generating sketches from images, methods often do not take the abstraction of concepts or objects present in freehand sketches into account, and the produced sketches are some forms of edgemaps (Xie & Tu, 2015; Li et al., 2019; Chan et al., 2022). Alternatively, there is a wide range of sketch generation work based on authentic sketches ranging from creative sketch generation (Ge et al., 2020), shading sketch generation (Li et al., 2020), image-to-sketch translation (Liu et al., 2020) and face sketch synthesis (Wang et al., 2020; Gao et al., 2023). 
Most of the models are based on a raster sketch representation, which does not reflect the stroke-based nature of authentic drawing that vector sketches do. To model sketches as a sequence of strokes, sequential and auto-regressive approaches have been used (Ha & Eck, 2018; Zhang et al., 2017). Further, methods for generating vector sketches include SketchHealer (Su et al., 2020), which is a graph-to-sequence approach, SketchPix2Seq (Chen et al., 2017) a CNN-based approach for vector sketches, SketchODE (Das et al., 2022) a neural ODE based approach and a Bézier curve based approach (Das et al., 2020b). More recently a denoising probabilistic-based method, SketchKnitter (Wang et al., 2022), has demonstrated state-of-the-art performance in generating simple vector sketches. Despite the impressive performance the requirement to represent all training data with a fixed number of points can prove limiting to wider applications of their method. A recent work (Carlier et al., 2020) attempted to forcefully impose permutation invariance on stroke-sequences via Hungarian matching (Kuhn, 1955) – a cubic time algorithm (i.e. \(O(n^3)\)) that is hard to scale beyond low cardinality samples. CLIPascene (Vinker et al., 2022) generates vector sketches from reference images, relying on a per-image optimization approach. **Point clouds** In this work, rather than representing sketches as a sequence of strokes, we represent a sketch as a set of (unordered) strokes. This problem bears similarity with point clouds generation and representation. Early point cloud works (Achlioptas et al., 2018; Gadelha et al., 2018) represented point clouds as fixed-sized matrices which enabled existing generative models to be easily applied to the problem. AtlasNet (Groueix et al., 2018) and FoldingNet (Yang et al., 2018) learned a mapping from patches in two-dimensional space to three-dimensional space which was able to deform the two-dimensional patches into point cloud shapes. These methods allowed for the generation of point clouds of variable size while also ensuring permutation invariance. An alternative approach is to consider a point cloud as a distribution over three-dimensional points. A range of different approaches utilize likelihood-based methods for generating point clouds of variable size. Thus, PointFlow (Yang et al., 2019) and DFP-Net (Klokov et al., 2020) utilize normalizing flows to model the distribution of points, while PointGrow (Sun et al., 2020) employs an auto-regressive model. More recently, (Luo & Hu, 2021) presented a probabilistic diffusion model-based approach, where they condition a diffusion model on a latent representation of the point cloud and probabilistically reconstruct point clouds. Colored point cloud generation (Wu et al., 2023) builds on the possible use cases for point clouds as it expands the complexity of the information the point cloud can represent. Our **stroke-cloud** approach seeks to leverage the flexibility provided by recent point cloud works but focuses on how to model more complex elements than 2D/3D point elements. Namely, we focus on how to model vector sketches consisting of a large number of strokes of diverse shapes. ### 3 METHOD Our generative method consists of two modules: (i) the **Stroke cloud Representation Module (SRM)**, comprised of a Set Transformer Lee et al. (2019) as an encoder and a conditional MLP-based diffusion model as the decoder, and (ii) the **Latent Stroke cloud Generator (LSG)**. 
The SRM module serves as an encoder-decoder, and combined with the LSG module it allows us to generate new drawings representative of the training dataset. The latent code generated by the LSG is decoded by the SRM into \(N\) individual strokes, where \(N\) is a hyperparameter. We are therefore able to probabilistically reconstruct complex drawings with a variable number of strokes. #### 3.1 Drawing representation We represent a drawing in our dataset \(D\) as a set of strokes \(S = \{s^{(1)}, s^{(2)}, s^{(3)}, \ldots, s^{(N)}\} \in D\). Note that the cardinality, \(N\), of a set \(S\) (the number of strokes in a line drawing) varies across drawings in the training data. We represent each stroke in the drawing as a Bézier curve. Unless specified otherwise, we use quadratic Bézier curves, represented as follows: \(s^{(i)} = (x_1, y_1, x_2, y_2, x_3, y_3)\), where each pair of \((x_i, y_i)\) are the coordinates of the \(i\)-th control point. While these relatively simple strokes lack the complexity of authentic hand-drawn strokes they can be used to represent complex drawings. For more information on stroke design and usage of Bézier curves of higher degrees please refer to the Appendix E. #### 3.2 Set representation module We model our Set Representation Module (SRM) as a generative conditional model. As a generative model, we use a Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020a). The training objective can be formulated as follows: $$\max_{\Psi} \sum_{S \in D} \log p(\Psi)(S),$$ where $\Psi$ are trainable parameters. **Stroke cloud joint probability distribution** It is challenging to define a generative model directly on the set space $S$ due to the varying number of strokes in each line drawing. A solution that we explore here is to first learn to transform a set $S$ into a latent representation. Our solution is inspired by the work by Zaheer et al. (2017) and De-Finetti’s Theorem of Exchangeability stating that a set (exchangeable sequences of random variables) can be modeled as a probability distribution over its constituent elements given some latent representation of the set. Therefore, we decompose a drawing $S$ into conditionally independent parametric density functions of the individual strokes $s \in S$, conditioned on a latent embedding $z$ of a drawing $S$: $$p_{\{\theta, \phi\}}(S) = \prod_{s \in S} p_{\theta}(s | z = E_{\phi}(S)),$$ where $E_{\phi}(S)$ is the drawing encoder into a latent space with learnable parameters $\phi$. Now, we can consider strokes as I.I.D., and can rewrite the training objective in Eq. (1) as follows: $$\max_{\{\theta, \phi\}} \sum_{S \in D} \log p_{\{\theta, \phi\}}(S) = \max_{\{\theta, \phi\}} \sum_{S \in D} \sum_{s \in S} \log p_{\theta}(s | z = E_{\phi}(S)).$$ **Training** As an approximation to the true log-likelihood in Eq. (3), we optimize a noise-estimator model trained on noisy versions of strokes $s_t$ at diffusion timestep (noise level) $t$: $$\min_{\{\theta, \phi\}} \mathbb{E}_{S \in D} \left[ \mathbb{E}_{s \in S, \epsilon \sim N(0, I), t \sim U(1, T)} \left[ ||\epsilon_{\theta}(s_t, t | z = E_{\phi}(S)) - \epsilon||_2^2 \right] \right],$$ where $\epsilon_{\theta}(\cdot)$ is the conditional noise-estimator. A noisy stroke, $s_t$, is obtained as follows: $s_t = \sqrt{\alpha_t}s + \sqrt{1 - \alpha_t}\epsilon$ for all timesteps $t \in [1, T]$, where $\alpha_t \in [0, 1]$ is a monotonically decreasing diffusion schedule and $\epsilon \in R^{6 \times 1}$. The noise is added to each control point of a stroke. 
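To make the training objective above concrete, the following is a minimal PyTorch-style sketch of one SRM training step, assuming a simple mean-pooling stand-in for the Set Transformer encoder $E_\phi$ and an MLP noise estimator; all module names, dimensions, and the noise schedule are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of one SRM training step (illustrative, not the authors' code).
# A drawing S is an (N, 6) tensor of quadratic Bezier control points in [-1, 1];
# E_phi can be any permutation-invariant set encoder (the paper uses a Set Transformer with PMA).
import torch
import torch.nn as nn

class MeanPoolSetEncoder(nn.Module):
    """Stand-in for the set encoder E_phi: strokes (N, 6) -> latent z (latent_dim,)."""
    def __init__(self, stroke_dim=6, latent_dim=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(stroke_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, strokes):                  # strokes: (N, 6)
        return self.phi(strokes).mean(dim=0)     # permutation-invariant pooling -> (latent_dim,)

class NoiseEstimator(nn.Module):
    """Conditional MLP noise estimator eps_theta(s_t, t | z)."""
    def __init__(self, stroke_dim=6, latent_dim=128, T=1000):
        super().__init__()
        self.t_emb = nn.Embedding(T, 64)
        self.net = nn.Sequential(
            nn.Linear(stroke_dim + latent_dim + 64, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, stroke_dim))

    def forward(self, s_t, t, z):                # s_t: (N, 6), t: (N,), z: (latent_dim,)
        z_rep = z.unsqueeze(0).expand(s_t.shape[0], -1)
        return self.net(torch.cat([s_t, z_rep, self.t_emb(t)], dim=-1))

def srm_training_step(strokes, encoder, eps_model, alpha_bar):
    """Noise-prediction loss for a single drawing (set of strokes)."""
    z = encoder(strokes)                                           # z = E_phi(S), shared by all strokes
    t = torch.randint(0, alpha_bar.shape[0], (strokes.shape[0],))  # t ~ U(1, T), one per stroke
    eps = torch.randn_like(strokes)                                # eps ~ N(0, I)
    a = alpha_bar[t].unsqueeze(-1)
    s_t = a.sqrt() * strokes + (1 - a).sqrt() * eps                # noise every control point
    return ((eps_model(s_t, t, z) - eps) ** 2).mean()              # MSE between true and predicted noise

# Illustrative usage with random data:
T = 1000
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)  # standard DDPM schedule
encoder, eps_model = MeanPoolSetEncoder(), NoiseEstimator(T=T)
loss = srm_training_step(torch.rand(350, 6) * 2 - 1, encoder, eps_model, alpha_bar)
loss.backward()
```

The key design point is that the latent $z$ is computed once per drawing and shared by every noisy stroke, so the per-stroke denoising losses are conditionally independent given $z$.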
For more detail on the standard Diffusion Model formulation, please refer to appendix F. **The stroke cloud encoder $E_{\phi}$** The stroke cloud encoder is an important element of the SRM, as it enables the representation of strokes as independent random variables. Due to its theoretically guaranteed permutation-invariant nature, we use a Set Transformer with Pooling by Multihead Attention (PMA) (with one seed vector) proposed by Lee et al. (2019) in order to encode a given set $S$ into a compact latent code $z$. **Reconstructing the stroke cloud** Given a trained noise estimator with the parameters $\theta^*$ and a trained stroke cloud encoder such that $z = E_{\phi^*}(S)$, we can decode the set by running any diffusion sampler: $$\hat{S} = \left\{ \hat{s}^{(j)} := \text{SAMPLER}(\epsilon_{\theta^*}, z) \mid j \in [1, N] \right\}$$ where SAMPLER($\cdot$) is any sampling procedure compatible with DDPM training (Ho et al., 2020a). We discuss the choice of the sampler in more detail in Sec. 4. Note that since we assume the strokes to be independent and identically distributed random variables, the model is not aware of the set’s cardinality $N$, which is the number of strokes in a sketch. However, due to the presence of the expectation $\mathbb{E}_{s \in S}$ in Eq. (3), the model does implicitly encode the relative importance of each stroke. We treat the cardinality $N$ of the reconstructed set $\hat{S}$ as a hyperparameter, and discuss it in more detail in Sec. 4. In Sec. 4, we show that both the hyperparameter $N$ and the exact sampling procedure influence the visual quality of the reconstructed drawing. ### 3.3 Latent Stroke cloud Generator To enable unconditional generation, we leverage a latent generative model that we term “Latent Stroke cloud Generator” (LSG). To train the LSG model, we extract embeddings of the drawings in our dataset $D$: $$D = \left\{ z := E_{\phi^*}(S) \mid S \in D \right\}$$ We provide more detail on De-Finetti’s Theorem of Exchangeability in the Appendix G. Figure 2: An illustration of the reverse diffusion process for a set of 1000 strokes with a DDIM sampling method. The original drawing was comprised of 350 strokes. Repeated strokes are ‘re-drawn’ on top of one another. LSG is then a simple generative model defined over the latent vectors \( z \): \( p_\psi(z) \) with trainable parameters \( \psi \). Just like in Sec. 3.2, we realize \( p_\psi(z) \) using a diffusion model. Specifically, we train a parametric noise estimator \( \epsilon_\psi \) on the noisy latents \( z_t = \sqrt{\alpha_t}z + \sqrt{1 - \alpha_t}\epsilon \). This estimator estimates the noise component \( \epsilon \): \[ \min_\psi \mathbb{E}_{z \in D, \epsilon \sim N(0, I), t \sim U(1, T)} \left[ ||\epsilon_\psi(z_t, t) - \epsilon||_2^2 \right]. \] (7) 4 EXPERIMENTS & RESULTS 4.1 DATASET PREPARATION A key challenge in generative modeling for complex vector drawings is acquiring a sufficiently large dataset. Due to the unavailability of such datasets, much of chirographic modeling (Ha & Eck, 2018; Aksan et al., 2020; Das et al., 2023; Wang et al., 2022; Das et al., 2022) has focused on the QuickDraw dataset, which contains a vast number of very simple drawings. For reference, the average number of strokes in sketches from QuickDraw is 5. 
To demonstrate the effectiveness of our stroke cloud-based sketch generation framework in generating complex vector sketches, we synthetically generate a new dataset that we name Anime-Vec10k, derived from the Danbooru2019 dataset (Branwen et al., 2019) of anime raster images. We then use this dataset to train our model. The Danbooru2019 image database comprises 3.69 million anime-style artworks in raster format, along with over 106 million annotations. To create our dataset, we randomly select 10,000 samples from a subset of Danbooru 2019 Portraits, which are portraits cropped from the original Danbooru dataset. We then transform these samples into line drawings using a style-transfer Generative Adversarial Network (GAN) as described in Chan et al. (2022). Finally, we utilize a line art vectorizer by Mo et al. (2021) to convert these synthetic line drawings into complex vector sketches, consisting of quadratic Bézier curves. This process is illustrated in Fig. 3. For more details, please refer to Appendix B. On average, the sketches in our dataset consist of 305 strokes. Figure 3: To generate the Anime-Vec10k dataset we take an original image from the Danbooru 2019 Portrait dataset and use a style GAN to convert it to ‘sketch style’. We then apply a vectorizer to generate a set of quadratic Bezier curves. | Sampler and stroke number | DDIM 100 | DDIM 500 | DDIM 1000 | DDIM 5000 | DDPM 1000 | |---------------------------|----------|----------|-----------|-----------|-----------| | FID | 191 | 34 | 9.8 | 11.3 | 58 | Table 1: Quantitative comparison of drawings generated under different sampling conditions. DDIM sampling was done with 30 steps while the DDPM sampling was done with 1000 steps. 4.2 Strokes representation and embedding As we introduced in Sec. 3.1, in our framework, each stroke in a drawing is represented as \( s^{(i)} = (x_1, y_1, x_2, y_2, x_3, y_3) \), where each pair of \( x_i, y_i \) denotes the control points of a quadratic Bézier curve. Using a Set Transformer (Lee et al., 2019), we encode each drawing as a set of variable cardinality, with each stroke as an element in the set. The resulting latent code conditions an MLP-based diffusion model, which learns to generate elements of the set. To address spectral bias in the MLP (Tancik et al., 2020), we employ a sinusoidal embedding for each control point coordinate. Increasing the dimensionality of the sinusoidal embedding helps to mitigate the spectral bias, as can be seen in Fig. 4. ![Figure 4](image) Figure 4: Varying the dimensionality of the sinusoidal positional embedding can have a significant impact on the drawing quality. While the general position of each stroke in the low-dimensional embedding drawings is correct they lack the fine-grained accuracy of the drawings with the higher-dimensional embedding. 4.3 Strokes sampling The variability in the number of strokes within drawings poses challenges in vector drawing generation. While existing methods often employ autoregressive techniques or fix the number of strokes or polyline points, our approach utilizes a more flexible probabilistic reconstruction process. This approach allows us to learn compact latent representations while effectively reconstructing these latent codes as observable strokes. After training, we can condition the SRM with a given latent code to generate samples and attempt to reconstruct the encoded drawing. However, the complexity of the drawing is generally unknown, introducing the challenges of both over-sampling and under-sampling. 
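For concreteness, the sketch below illustrates this probabilistic reconstruction: $N$ strokes are sampled i.i.d. from the conditional diffusion model given the set embedding $z$, here with a deterministic DDIM-style update. The step count, the clamping to $[-1, 1]$, and the `eps_model`/`alpha_bar` names (matching the training sketch above) are illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of decoding a latent drawing code z into n_strokes quadratic Bezier strokes.
# Each stroke is denoised independently given z; n_strokes is the cardinality hyperparameter N.
import torch

@torch.no_grad()
def reconstruct_strokes(eps_model, z, alpha_bar, n_strokes=1000, n_steps=30):
    """Sample n_strokes strokes i.i.d. from p_theta(s | z) with a deterministic DDIM loop."""
    T = alpha_bar.shape[0]
    timesteps = torch.linspace(T - 1, 0, n_steps).long()         # coarse sampling schedule
    s = torch.randn(n_strokes, 6)                                # start every stroke from pure noise
    for i, t in enumerate(timesteps):
        t_batch = torch.full((n_strokes,), int(t), dtype=torch.long)
        eps = eps_model(s, t_batch, z)                           # conditional noise prediction
        a_t = alpha_bar[t]
        s0 = (s - (1 - a_t).sqrt() * eps) / a_t.sqrt()           # predicted clean strokes
        if i + 1 < len(timesteps):
            a_prev = alpha_bar[timesteps[i + 1]]
            s = a_prev.sqrt() * s0 + (1 - a_prev).sqrt() * eps   # deterministic DDIM update (eta = 0)
        else:
            s = s0
    return s.clamp(-1, 1)                                        # (n_strokes, 6) control points
```

Choosing `n_strokes` larger than the true cardinality mostly redraws strokes on top of one another, whereas choosing it too small leaves the canvas sparse; this is the over- versus under-sampling trade-off discussed next.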
For more detailed information on probabilistic reconstruction, please refer to Appendix D. **Over-sampling:** When we generate a significantly larger number of samples than the original number of strokes in the drawing, over-sampling can occur. This is illustrated in the leftmost drawing of Fig. 5. The generative process may result in particular strokes being sampled more frequently, leading to slight variations and noise in some sections of the drawing. Overall, the drawing quality remains largely unchanged, with most strokes being ‘redrawn’ on top of one another. **Under-sampling:** On the other hand, under-sampling involves generating too few strokes, resulting in a sparsely populated canvas, as seen in the right-hand drawings of Fig. 5. Under-sampling significantly impacts the quality of the drawing. Table 1 shows the effect of varying the number of generated strokes on the visual quality of the drawing as measured by the FID. These results confirm that the effect of under-sampling on the visual quality is significantly more severe than over-sampling. **Sampler:** The choice of sampling method can also influence the quality and style of the drawing. Fig. 6 demonstrates the effects of varying the sampling method. Our reconstructive method may re-sample some strokes multiple times, affecting the variance in each sample and, consequently, the Figure 5: The number of generated samples has a significant impact on the generated drawing. With only 100 samples (right) the drawing is sparsely populated by strokes and key features are missing. However, with 1000 samples (left) a more accurate reconstruction is achieved. Figure 6: The stochasticity of the sampling method has a significant impact on the quality of the drawings. We utilize a DDIM sampler (left) which is the most resilient to the re-sampling problem. We increase the stochasticity of the sampling process by using a DDPM sampler and multiplying the variance by a scale factor (Das et al., 2023). The scale factor is displayed beneath each drawing. drawing’s appearance. Using a deterministic DDIM sampler produces drawings with clean edges and minimally noticeable re-sampling. The small variance in re-sampled strokes is concealed by the line’s thickness. However, using a stochastic sampling method creates a shading effect similar to an artist sketching a composition. To control this effect, we adjust the variance in the de-noising step using a scale factor (Das et al., 2023), increasing or decreasing the stochasticity of the process. Fig. 6 illustrates the impact of this scale factor on the drawing’s style. 4.4 Generation To generate a drawing we must first generate a latent code with the LSG, this is done using a DDIM sampler and 30 time steps. We decode the resultant latent vector with the SRM, with a DDIM sampler and 30 time steps, into a drawing comprised of 1000 strokes. Drawings generated by our model are shown in Fig. 7. To generate sketches in this figure, we generate the number of strokes much larger than the average number of strokes in the training dataset. While selecting the correct number of samples is an important choice in generating complete and highly complex drawings, if a drawing is incomplete we do have the option of appending more generated samples using the same latent code. Figure 7: Drawings of 1000 strokes created by decoding latent vectors generated by the LSG with the SRM 4.5 INTERPOLATION The SRM plays a crucial role in reconstructing drawings based on a given latent condition. 
Therefore, it is essential to assess the model’s robustness to different conditions. On the other hand, the LSG’s primary function is to generate a latent code that the SRM can decode successfully. **SRM:** The SRM does not operate unconditionally; instead, it relies on being conditioned by a latent stroke cloud to perform the reconstruction. Consequently, it needs to be resilient to variations in codes generated by the LSG. As illustrated in Fig. 8, when we interpolate between two encoded stroke clouds, the SRM retains semantic features from each stroke cloud, even for conditions it has not encountered previously. **LSG:** The LSG serves as the core generative component of our model, providing the essential latent code for the SRM to work with. Fig. 9 demonstrates that it is possible to interpolate between two Figure 8: Interpolating between encoded drawings leads to a gradual morphing between drawings. While noisy in the middle of the transition, clear features are still distinguishable. Figure 9: Two random vectors were generated and interpolated between. Each vector was used as the initial vector in the LSG denoising process. randomly selected noisy vectors, resulting in distinct drawings. Furthermore, we show in Fig. 10 the impact of gradually adding noise to the originally selected random vector. The sketches obtained from the noised latent vectors are also distinct and plausible. Figure 10: A random latent vector was sampled, and then Gaussian noise was added gradually to obtain a new latent. The amount of noise added increases from left to right. 5 DISCUSSION In this paper, we propose modeling complex vector drawings as sets of structurally complex elements. We learn to embed these drawings into compact latent codes. These latent codes then condition an MLP-based diffusion model that enables the efficient generation of highly complex vector drawings through a latent diffusion process, supported by De-Finetti’s Theorem of Exchangeability. One limitation of our approach is the unknown a priori number of strokes to sample. However, we have shown that oversampling produces visually pleasing sketches in which some strokes overlap. Such strokes can be potentially removed in post-processing by analyzing the areas of overlaps. Limited by the lack of datasets of complex vector drawings, we trained on synthetic data. However, the strokes produced by the automatic vectorizer are shorter than those of hand-drawn sketches. In the supplementary, we provide additional results, showing how our method can support more complex strokes by increasing the number of control points in our stroke representation. Moreover, we also show in the supplementary how additional attributes such as stroke width can be supported by our framework. In summary, we proposed the first approach to model complex vector drawing in a generative context. Our code and the data are available at https://github.com/Co-do/Stroke-Cloud. REFERENCES Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. In *International conference on machine learning*, pp. 40–49. PMLR, 2018. Emre Aksan, Thomas Deselaers, Andrea Tagliasacchi, and Otmar Hilliges. Cose: Compositional stroke embeddings. *NeurIPS*, 2020. Gwern Branwen, Danbooru community, and Anonymous. Danbooru2019: A large-scale crowdsourced and tagged anime illustration dataset, 2019. Alexandre Carlier, Martin Danelljan, Alexandre Alahi, and Radu Timofte. 
Deepsyg: A hierarchical generative network for vector graphics animation. *NeurIPS*, 2020. Caroline Chan, Frédo Durand, and Phillip Isola. Learning to generate line drawings that convey geometry and semantics. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7915–7925, 2022. Yajing Chen, Shikui Tu, Yuqi Yi, and Lei Xu. Sketch-pix2seq: a model to generate sketches of multiple categories. *arXiv preprint arXiv:1709.04121*, 2017. Pinaki Nath Chowdhury, Aneeshan Sain, Ayan Kumar Bhunia, Tao Xiang, Yulia Gryaditskaya, and Yi-Zhe Song. Fs-coco: Towards understanding of freehand sketches of common objects in context. In *European Conference on Computer Vision*, pp. 253–270. Springer, 2022. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Béziersketch: A generative model for scalable vector sketches. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16*, pp. 632–647. Springer, 2020a. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Béziersketch: A generative model for scalable vector sketches. In *ECCV*, 2020b. Ayan Das, Yongxin Yang, Timothy M Hospedales, Tao Xiang, and Yi-Zhe Song. Sketchode: Learning neural sketch representation in continuous time. In *Tenth International Conference on Learning Representations 2022*, 2022. Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Chirodiff: Modelling chirographic data with diffusion models. In *The Eleventh International Conference on Learning Representations*, 2023. Matheus Gadelha, Rui Wang, and Subhransu Maji. Multiresolution tree networks for 3d point cloud processing. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 103–118, 2018. Fei Gao, Yifan Zhu, Chang Jiang, and Nannan Wang. Human-inspired facial sketch synthesis with dynamic adaptation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 7237–7247, 2023. Songwei Ge, Vedanuj Goswami, C Lawrence Zitnick, and Devi Parikh. Creative sketch generation. *arXiv preprint arXiv:2011.10039*, 2020. Alex Graves. Generating sequences with recurrent neural networks. *CoRR*, abs/1308.0850, 2013. Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. A papier-mâché approach to learning 3d surface generation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 216–224, 2018. David Ha and Douglas Eck. A neural representation of sketch drawings. In *ICLR*, 2018. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *NeurIPS*, 2020a.
auKAUJZMO6
In Section 3.5, the statement that “if the parametric memory we elicit is truly the internal belief of an LLM’s, presenting it explicitly as evidence should lead the LLM to provide the same answer as in the closed-book setting” incorrectly assumes the existence of confirmation bias, which may not hold.
Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts Jian Xie∗∗ Kai Zhang∗∗ Jiangjie Chen∗ Renze Lou♡ Yu Su∗ ∗School of Computer Science, Fudan University ∗∗The Ohio State University ♡The Pennsylvania State University jianxie22@m.fudan.edu.cn, {zhang.13253, su.809}@osu.edu Abstract By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs’ static parametric memory. However, how receptive are LLMs to such external evidence, especially when the evidence conflicts with their parametric memory? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering knowledge conflicts. We propose a systematic framework to elicit high-quality parametric memory from LLMs and construct the corresponding counter-memory, which enables us to conduct a series of controlled experiments. Our investigation reveals seemingly contradicting behaviors of LLMs. On the one hand, different from prior wisdom, we find that LLMs can be highly receptive to external evidence even when that conflicts with their parametric memory, given that the external evidence is coherent and convincing. On the other hand, LLMs also demonstrate a strong confirmation bias when the external evidence contains some information that is consistent with their parametric memory, despite being presented with conflicting evidence at the same time. These results pose important implications that are worth careful consideration for the further development and deployment of tool- and retrieval-augmented LLMs. Resources are available at https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict 1 Introduction After pre-training on massive corpora, large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Ouyang et al., 2022; OpenAI, 2022, 2023; Zeng et al., 2023; Touvron et al., 2023a) have formed a wealth of parametric memory, such as commonsense and factual knowledge (Petroni et al., 2019; Li et al., 2022; Zhao et al., 2023). However, such parametric memory may be inaccurate or become outdated (Liska et al., 2022; Luu et al., 2022) due to misinformation in the pre-training corpus or the static nature of parametric memory, known to be a major cause for hallucinations (Elazar et al., 2021; Shuster et al., 2021; Ji et al., 2023). Tool (Schick et al., 2023; Qin et al., 2023) or retrieval augmentation (Mallen et al., 2022; Shi et al., 2023b; Ram et al., 2023) has emerged as a promising solution by providing external information as new evidence to LLMs, such as ChatGPT Plugins and New Bing. However, external evidence, inevitably, could conflict with LLMs’ parametric memory. We refer to external evidence that conflicts with parametric memory as counter-memory. In this paper, we seek to answer the question: how receptive are LLMs to external evidence, especially counter-memory? A solid understanding of this question is an essential stepping stone for wider application of tool-augmented LLMs. Not only does this relate to overcoming the limitations of LLM’s static parametric memory, but it is also associated ∗The first two authors contributed equally. Work done during Jian Xie’s internship at OSU NLP Group. 1In the rest of the paper we use “tool-augmented LLMs” because retrievers are one type of tools, but tools are not limited to retrievers (consider, e.g., a question answering tool). with direct safety concerns. 
For example, what if a third-party tool, either by the developer or hijacked by attackers, intentionally returns disinformation? Will LLMs be deceived? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering counter-memory. A key challenge lies in how to construct the counter-memory. Prior work employs various heuristics, such as negation injection (Niu & Bansal [2018], Kassner et al. [2021], Gubelmann & Handschuh [2022]) and entity substitution (Longpre et al. [2021], Zhou et al. [2023]), and finds that language models (both large and small) tend to be stubborn and cling to their parametric memory. However, such heuristic word-level editing results in incoherent counter-memory (see an example in Section 4.1), which may make it trivial for LLMs to detect and thus neglect the constructed counter-memory. It is unclear how the prior conclusions translate to real-world scenarios, where counter-memory is more coherent and convincing. We propose a systematic framework to elicit the parametric memory of LLMs and construct the corresponding counter-memory. We design a series of checks, such as entailment from parametric memory to the answer, to ensure that the elicited parametric memory is indeed the LLM’s internal belief. For the counter-memory, instead of heuristically editing the parametric memory, we instruct an LLM to directly generate a coherent passage that factually conflicts with the parametric memory. After obtaining a large pool of parametric memory and counter-memory pairs, we then examine LLMs’ behavior in different knowledge conflict scenarios, including 1) when only counter-memory is present as external evidence and 2) when both parametric memory and counter-memory are present. Our investigation leads to a series of interesting new findings. We highlight the following: - **LLMs are highly receptive to external evidence** if that is the only evidence, even when it conflicts with their parametric memory. This contradicts the prior wisdom (Longpre et al. [2021]), and we attribute this to the more coherent and convincing counter-memory constructed through our framework. On the other hand, this also suggests that LLMs may be easily deceived by, e.g., disinformation from malicious (third-party) tools. - However, with both supportive and contradictory evidence to their parametric memory, LLMs show a strong confirmation bias (Nickerson [1998]) and tend to cling to their parametric memory. This reveals a potential challenge for LLMs to unbiasedly orchestrate multiple pieces of conflicting evidence, a common situation encountered by generative search engines. ## Related Work ### Parametric Memory in Language Models After pre-training, language models have internalized a vast amount of knowledge into their parameters (Roberts et al. [2020], Jiang et al. [2020]), also known as parametric memory. Many past studies have explored the elicitation of parametric memory in language models, such as commonsense or factual knowledge probing (Petroni et al. [2019], Lin et al. [2020], Zhang et al. [2021], West et al. [2022], Chen et al. [2023], Wang et al. [2023]). Such parametric memory could help solve downstream tasks (Wang et al. [2021], Yu et al. [2023], Sun et al. [2023]). However, previous work has discovered that language models only memorize a small portion of the knowledge they have been exposed to during pre-training (Carlini et al. [2021], [2023]) due to model’s limited memorization abilities. 
In addition, the parametric memory may become outdated (Lazaridou et al. [2021], De Cao et al. [2021]). Such incorrect and outdated parametric memory may show as hallucinations (Elazar et al. [2021], Shuster et al. [2021], Ji et al. [2023]). Although some methods are proposed to edit knowledge in language models (Dai et al. [2022], Meng et al. [2022], [2023]), they typically require additional modifications on model weights without evaluating the consequences on models’ other aspects such as performances and are limited to factual knowledge. ### Tool-augmented Language Models To address the limitations of parametric memory, external tools such as retrievers are used to augment language models with up-to-date information, namely tool-augmented (Nakano et al. [2021], Yao et al. [2023], Qin et al. [2023], Schick et al. [2023], Lu et al. [2023]) or retrieval-augmented (Guu et al. [2020], Khambelwal et al. [2020], Izacard & Grave [2021], Borgeaud et al. [2022], Zhong et al. [2022]) language models. Such a framework, which has proven its efficacy in enhancing large language models (Shi et al. [2023b], Ram et al. [2023], Mallen et al. [2022]), is adopted in real-world applications such as New Bing and ChatGPT Plugins. Inevitably, the external evidence could conflict with the parametric memory. However, the behavior of LLMs in knowledge conflict scenarios remains under-explored, and unraveling it holds significance for wider applications of tool-augmented LLMs. Knowledge Conflict To perform controlled experiments, knowledge conflict is often simulated with counter-memory constructed upon parametric memory. Heuristic counter-memory construction methods such as negation injection (Niu & Bansal, 2018; Kassner et al., 2021; Petroni et al., 2020; Pan et al., 2021) have been developed. Furthermore, entity substitution (Longpre et al., 2021; Chen et al., 2022; Si et al., 2023; Zhou et al., 2023) replaces all mentions of the answer entity in parametric memory with other entities to construct counter-memory. However, these methods are limited to word-level editing, leading to low overall coherence in the counter-memory. We instead instruct LLMs to generate counter-memory from scratch to ensure high coherence. 3 EXPERIMENTAL SETUP In this section, we describe our framework for eliciting high-quality parametric memory from LLMs and constructing the corresponding counter-memory, as well as the evaluation metrics. 3.1 DATASETS Following prior work (Longpre et al., 2021; Chen et al., 2022), we adopt question answering (QA) task as the testbed for knowledge conflict experiments. In addition to an entity-based QA dataset (POPQA), we include a multi-step reasoning dataset (STRATEGYQA) for diversifying the questions studied in the experiments. Specifically, - **POPQA** (Mallen et al., 2022) is an entity-centric QA dataset that contains 14K questions. Data for POPQA originates from triples in Wikidata. Employing custom templates tailored to relationship types, the authors construct questions through the substitution of the subject within knowledge triples. POPQA defines the popularity of a question based on the monthly Wikipedia page views associated with the entity mentioned in the question. - **STRATEGYQA** (Geva et al., 2021) is a multi-step fact reasoning benchmark that necessitates the implicit question decomposition into reasoning steps. 
The questions are built around Wikipedia terms and cover a wide range of strategies, which demand the model’s capability to select and integrate relevant knowledge effectively. The language model is expected to provide a True or False answer. Table 1: The correctness of LLMs responses in closed-book QA fashion (Step 1 in Figure 1). We examine eight LLMs, including three closed-source LLMs and five open-source LLMs. | Models | Correct | Wrong | Unknown | Correct | Wrong | Unknown | |-----------------|---------|-------|---------|---------|-------|---------| | **Closed-source LLMs** | | | | | | | | ChatGPT [OpenAI, 2022] | 44.6 | 44.4 | 11.0 | 67.4 | 30.7 | 1.9 | | GPT-4 [OpenAI, 2023] | 50.8 | 48.7 | 0.5 | 77.3 | 22.7 | 0.0 | | PaLM2 [Anil et al., 2023] | 32.9 | 67.1 | 0.0 | 67.9 | 32.1 | 0.0 | | **Open-source LLMs** | | | | | | | | Qwen-7B [Alibaba, 2023] | 24.9 | 62.6 | 5.1 | 56.8 | 43.2 | 0.0 | | Llama2-7B [Touvron et al., 2023b] | 24.1 | 75.9 | 0.0 | 56.7 | 43.3 | 0.0 | | Llama2-70B [Touvron et al., 2023b] | 43.0 | 57.0 | 0.0 | 64.4 | 35.7 | 0.0 | | Vicuna-7B [Zheng et al., 2023] | 23.8 | 69.3 | 6.9 | 55.0 | 45.0 | 0.0 | | Vicuna-33B [Zheng et al., 2023] | 28.6 | 71.4 | 0.0 | 65.0 | 35.0 | 0.0 | ### 3.2 Parametric Memory Elicitation Step 1 in Figure 1 illustrates how we elicit parametric memory: in a closed-book QA fashion, LLMs recall their parametric memory to answer questions without any external evidence. Specifically, given a question, e.g., “Who is the chief scientist of Google DeepMind”, LLMs are instructed to provide an answer “Demis Hassabis” and its supporting background information about how Demis founded and led DeepMind in detail. We cast the detailed background as parametric memory because the answer only represents the conclusion of parametric memory w.r.t. the given question. Table 1 shows the closed-book results of LLMs on POPQA and STRATEGYQA. Notably, LLMs may respond with “Unknown” when no evidence is provided in the context, particularly in ChatGPT. Such answer abstention [Rajpurkar et al., 2018] suggests that LLMs fail to recall valid memory associated with the given question, so we discard them. For comprehensiveness, we also keep the examples that LLMs answer incorrectly in the closed-book paradigm because the wrong answer and associated memory are also stored in model parameters. ### 3.3 Counter-memory Construction As depicted in Figure 1 at Step 2, we reframe the memory answer “Demis Hassabis” to a counter-answer (e.g., “Jeff Dean”). Concretely, for POPQA, we substitute the entity in the memory answer with a same-type entity (e.g., from Demis to Jeff); while in STRATEGYQA, we flip the memory answer (e.g., from positive sentence to negative sentence). With counter-answer “Jeff Dean”, we instruct ChatGPT to make up supporting evidence that Jeff Dean serves as chief scientist of DeepMind. We term such evidence that conflicts with parametric memory as counter-memory. Since the counter-memory is generated from scratch by powerful generative LLMs, it is more coherent compared to previous word-level editing methods [Longpre et al., 2021; Chen et al., 2022] performed on parametric memory. Both generated parametric memory and counter-memory could serve as external evidence for later experiments on LLMs in knowledge conflicts. Please refer to Appendix B.1 for more details of evidence construction in each dataset. ### 3.4 Answer-evidence Entailment Checking An ideal piece of evidence should strongly support its answer. 
For instance, the parametric memory about Demis and DeepMind should clearly support the corresponding memory answer that Demis is the chief scientist of DeepMind. Similarly, counter-memory should clearly support the corresponding counter-answer as well. Therefore, for Step 3 shown in Figure 1, we utilize a natural language inference (NLI) model for support-checking to ensure the evidence indeed entails the answer. Specifically, --- 2We leverage ChatGPT for its cost-effectiveness and its on-par counter-memory generation ability with GPT-4. In our pilot study (based on 1000 instances), LLMs showed the same level of receptiveness to counter-memory generated by both ChatGPT and GPT-4. Table 2: Examples in final dataset. Both parametric memory and counter-memory are generated by ChatGPT. They are coherent and present an illusion of correctness even when factually incorrect. | Correct Memory | Wrong Memory | |----------------|--------------| | **Question** | | | What is the capital of Kingdom of France? | Who was the producer of The Cardinal? | | **Ground Truth** | Otto Preminger | | Paris | Otto Preminger | **Parametric Memory** Paris was chosen as the capital of the Kingdom of France due to its central location and strategic importance. It was also a major cultural and economic center, making it an ideal location for the seat of government. Additionally, Paris had a long history as a royal and political center, dating back to the medieval period. **Counter-memory** Néma is the capital of the Kingdom of France. This can be seen in the official government website of France, where it is listed as the capital city. Similarly, Néma serves as the royal palace and the seat of the French government, further confirming its status as the capital. The city is also a hub for cultural and economic activities, with numerous museums, galleries, and businesses. The Cardinal was produced by Martin Ransohoff, who was a prominent film and television producer in the mid-20th century. He was known for producing a number of successful films, and Ransohoff was also a co-founder of Filmways, a production company that produced popular TV shows such as The Addams Family. we use the state-of-the-art NLI model DeBERTa-V2 (He et al., 2021) to determine whether both the parametric memory and counter-memory support their corresponding answers. We only keep the examples where both answers are supported for subsequent experiments. To ensure the reliability of the selected NLI model, we manually evaluated 200 random examples and observed 99% accuracy of the model. Please refer to Appendix B.5 for more details. 3.5 Memory Answer Consistency We adopt another check (Step 4 of Figure 1) for further ensuring the data quality. If the parametric memory we elicit is truly the internal belief of an LLM’s, presenting it explicitly as evidence should lead the LLM to provide the same answer as in the closed-book setting (Step 1). Therefore, in the evidence-based QA task format, we use the parametric memory as the sole evidence and instruct LLMs to answer the same question again. For example, given the parametric memory about Demis and DeepMind, LLMs should have a consistent response with the previous memory answer, that Demis is the chief scientist of DeepMind. However, the answer inconsistency results in Table 3 show that LLMs may still change their answers when the parametric memory obtained in Step 1 is explicitly presented as evidence. 
This suggests that the LLM’s internal belief on this parametric memory may not be firm (e.g., there may competing answers that are equally plausible based on the LLM). We filter out such examples to ensure the remaining ones well capture an LLM’s firm parametric memory. After undergoing entailment and answer consistency checks, the remaining examples are likely to represent firm parametric memory and high-quality counter-memory, which lay a solid foundation for subsequent knowledge conflict experiments. Some examples from the final PopQA data are shown in Table 2 and the statistics of the final datasets are shown in Table 4. Please refer to Appendix B.2 for more details for Step 3 and 4 and examples. 3.6 Evaluation Metrics A single generation from an LLM could contain both the memory answer and the counter-answer, which poses a challenge to automatically determine the exact answer from an LLM. To address this issue, we transform the free-form QA to a multiple-choice QA format by providing a few options as possible answers. This limits the generation space and helps determine the answer provided by LLMs with certainty. Specifically, for each question from both datasets, LLMs are instructed to select one answer from memory answer (Mem-Ans.), counter-answer (Ctr-Ans.), and “Uncertain”. Additionally, to quantify the frequency of LLMs sticking to their parametric memory, we adopt the memorization ratio metric (Longpre et al., 2021; Chen et al., 2022): $$M_R = \frac{f_m}{f_m + f_c},$$ (1) https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli Table 3: Answer inconsistency rate between closed-book results (Step 1) and evidence-based QA with parametric memory (Step 4). | | POPQA | STRATEGYQA | |----------|-------|------------| | ChatGPT | 4.7% | 3.7% | | GPT-4 | 3.9% | 2.6% | | PaLM2 | 8.4% | 2.7% | | Qwen-7B | 5.4% | 5.6% | | Llama2-7B| 4.7% | 7.3% | | Llama2-70B| 2.3% | 0.7% | | Vicuna-7B| 12.4% | 6.9% | | Vicuna-33B| 16.6% | 5.3% | Table 4: Number of final examples for each LLM. The difference between LLMs is due to their different outputs going through the framework. | | POPQA(#) | STRATEGYQA(#) | |----------|----------|---------------| | ChatGPT | 7,947 | 1,245 | | GPT-4 | 9,544 | 1,356 | | PaLM2 | 5,256 | 500 | | Qwen-7B | 7,204 | 671 | | Llama2-7B| 8,027 | 698 | | Llama2-70B| 9,314 | 822 | | Vicuna-7B| 4,170 | 559 | | Vicuna-33B| 3,787 | 775 | where $f_m$ is the frequency of memory answer and $f_c$ is that of counter-answer. Higher memorization ratios signify LLMs relying more on their parametric memory, while lower ratios indicate more frequent adoption of the counter-memory. 4 EXPERIMENTS 4.1 SINGLE-SOURCE EVIDENCE We experiment with LLMs in the single-source evidence setting where counter-memory is the sole evidence presented to LLMs. Such knowledge conflict happens when LLMs are augmented with tools returning single external evidence such as Wikipedia API (Yao et al., 2023). In particular, for counter-memory construction, we would apply 1) the entity substitution counter-memory method, a widely-applied strategy in previous work, and 2) our generation-based method. LLMs are stubborn when encountering entity substitution-based counter-memory. Following previous work (Longpre et al., 2021; Chen et al., 2022), we substitute the exactly matched ground truth entity mentions in the parametric memory with a random entity of the same type. The counter-memory is then used as the sole evidence for LLMs to answer the question. Here is an example: **Evidence:** Washington D.C. 
London, USA’s capital, has the Washington Monument. **Question:** What is the capital city of USA? **Answer by ChatGPT:** Washington D.C. Figure 2 shows the results with this approach on POPQA dataset. Observably, although the instruction clearly guides LLMs to answer questions based on the given counter-memory, LLMs still stick to their parametric memory instead, especially for three closed-sourced LLMs (ChatGPT, GPT-4, and PaLM2). This observation is aligned with previous work (Longpre et al., 2021). The reasons may stem from the incoherence of the evidence built with substitution: In the given example, although “Washington D.C.” is successfully substituted by “London”, the context containing Washington Monument and USA still highly correlate with the original entity, impeding LLMs to generate London as the answer. Furthermore, when comparing Llama2-7B and Vicuna-7B to their larger counterparts in the same series (i.e., Llama2-70B and Vicuna-33B), we observe that the larger LLMs are more inclined to insist on their parametric memory. We suppose that larger LLMs, due to their enhanced memorization and reasoning capabilities, are more sensitive to incoherent sentences. LLMs are highly receptive to generated coherent counter-memory. To alleviate the incoherence issue of the above counter-memory, we instruct LLMs to directly generate coherent counter-memory following the steps aforementioned (Figure 1). Figure 2 shows the experimental results with generation-based counter-memory, from which we can have the following observations: First, LLMs are actually highly receptive to external evidence if it is presented in a coherent way, even though it conflicts with their parametric memory. This contradicts the prior conclusion (Longpre et al., 2021) and the observation with entity substitution counter-memory shown in Figure 2. Such high receptiveness in turn shows that the counter-memory constructed through our framework is indeed more coherent and convincing. We manually check 50 stubborn (i.e., “Mem-Ans.”) cases and Figure 2: Answer distributions of entity substitution-based (Subs.) and generation-based (Gen.) counter-memory as the single evidence. Mem-Ans. and Ctr-Ans. refers to memory answer and counter-answer, respectively. Figure 3: Memorization ratio of LLMs answering questions from different popularity categories. Higher memorization ratio indicates LLMs rely more on their parametric memory and generate the memory answer. We choose four widely-used LLMs as experimental objects. find that most of them are due to hard-to-override commonsense or lack of strong direct conflicts. Detailed analyses can be found in Appendix B.3. Second, many of the generated counter-memory are disinformation that misleads LLMs to the wrong answer. Concerningly, LLMs appear to be susceptible to and can be easily deceived by such disinformation. Exploring methods to prevent LLMs from such attacks when using external tools warrants significant attention in future research. Third, the effectiveness of our generated counter-memory also shows that LLMs can generate convincing dis- or misinformation, sufficient to mislead even themselves. This raises concerns about the potential misuse of LLMs. 4.2 Multi-source Evidence Multi-source evidence is a setting where multiple pieces of evidence that either supports or conflicts with the parametric memory are presented to LLMs. Such knowledge conflicts can happen frequently, e.g., when LLMs are augmented with search engines having diverse or even web-scale information sources. 
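Receptiveness in these multi-source experiments is again quantified with the memorization ratio of Eq. (1). As a minimal illustration, the sketch below maps each multiple-choice response to the memory answer, the counter-answer, or "Uncertain" and computes $M_R$; the string-matching rule and the toy responses are assumptions for illustration only (the paper's exact prompts are listed in its Appendix C).

```python
# Minimal sketch of scoring multiple-choice responses and computing the memorization
# ratio M_R = f_m / (f_m + f_c) from Eq. (1). The matching heuristic is an assumption.
from collections import Counter

def classify_response(response: str, memory_answer: str, counter_answer: str) -> str:
    """Map a model response to 'memory', 'counter', or 'uncertain'."""
    r = response.lower()
    if memory_answer.lower() in r and counter_answer.lower() not in r:
        return "memory"
    if counter_answer.lower() in r and memory_answer.lower() not in r:
        return "counter"
    return "uncertain"

def memorization_ratio(labels) -> float:
    """'Uncertain' answers do not enter Eq. (1), which only counts f_m and f_c."""
    counts = Counter(labels)
    f_m, f_c = counts["memory"], counts["counter"]
    return f_m / (f_m + f_c) if (f_m + f_c) > 0 else 0.0

# Illustrative usage with made-up responses to the running DeepMind example:
examples = [
    ("Demis Hassabis", "Demis Hassabis", "Jeff Dean"),   # sticks to parametric memory
    ("Jeff Dean",      "Demis Hassabis", "Jeff Dean"),   # adopts the counter-memory
    ("Uncertain",      "Demis Hassabis", "Jeff Dean"),   # abstains
]
labels = [classify_response(r, m, c) for r, m, c in examples]
print(memorization_ratio(labels))  # 0.5 on this toy sample
```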
We study the evidence preference of LLMs from different aspects of evidence, including popularity, order, and quantity. By default, the order of evidence is randomized in all experiments in Section 4.2 if not specified otherwise. LLMs exhibit stronger confirmation bias in more popular knowledge. Step 5 in Figure 1 illustrates how we instruct LLMs to answer questions when both parametric memory and counter-memory are presented as evidence. Figure 3 shows the memorization ratio of different LLMs w.r.t. the question popularity on POPQA. Table 5: Memorization ratio of LLMs with different evidence orders. | First Evidence | POPQA | STRATEGYQA | |----------------|-------|------------| | | ChatGPT | GPT-4 | PaLM2 | Llama2-7B | ChatGPT | GPT-4 | PaLM2 | Llama2-7B | | Parametric Memory | 46.7 | 60.9 | 38.6 | 33.3 | 59.5 | 73.6 | 43.6 | 84.0 | | Random | 43.0 | 61.9 | 56.8 | 58.4 | 50.1 | 71.7 | 55.3 | 84.5 | | Counter-memory | 40.1 | 62.7 | 72.2 | 82.8 | 42.2 | 70.5 | 76.9 | 86.2 | Table 6: Memorization ratio of LLMs under varying proportions of parametric memory in all the available evidence, e.g., \( \frac{1}{3} \) means one piece of parametric memory and two pieces of counter-memory. | Models | POPQA | STRATEGYQA | |--------|-------|------------| | | \( \frac{0}{2} \) (0%) | \( \frac{1}{3} \) (33%) | \( \frac{1}{2} \) (50%) | \( \frac{2}{3} \) (67%) | \( \frac{1}{2} \) (100%) | \( \frac{0}{2} \) (0%) | \( \frac{1}{3} \) (33%) | \( \frac{1}{2} \) (50%) | \( \frac{2}{3} \) (67%) | \( \frac{1}{2} \) (100%) | | Closed-source LLMs | | | | ChatGPT | 3.7 | 30.0 | 43.0 | 63.3 | 86.2 | 99.8 | 2.6 | 26.8 | 50.0 | 48.9 | 72.6 | 99.6 | | GPT-4 | 8.9 | 50.3 | 65.4 | 75.4 | 91.0 | 99.8 | 13.0 | 46.0 | 72.8 | 72.9 | 88.7 | 99.7 | | PaLM2 | 15.8 | 15.8 | 56.8 | 53.9 | 69.9 | 89.5 | 18.1 | 52.9 | 55.3 | 65.2 | 71.5 | 83.0 | | Open-source LLMs | | | | Qwen-7B | 2.3 | 32.5 | 52.3 | 63.0 | 80.4 | 99.2 | 9.5 | 55.1 | 56.8 | 67.6 | 76.3 | 94.6 | | Llama2-7B | 2.6 | 34.6 | 58.4 | 65.1 | 83.7 | 91.7 | 11.5 | 70.8 | 84.5 | 84.1 | 89.1 | 96.8 | | Llama2-70B | 3.0 | 21.6 | 58.4 | 62.9 | 72.9 | 96.0 | 11.6 | 48.7 | 57.8 | 70.8 | 80.7 | 99.2 | | Vicuna-7B | 1.7 | 29.5 | 45.9 | 56.2 | 74.6 | 98.6 | 44.9 | 86.1 | 87.0 | 88.6 | 89.8 | 97.1 | | Vicuna-33B | 4.6 | 49.5 | 51.7 | 75.7 | 87.7 | 99.1 | 32.1 | 52.0 | 53.1 | 54.7 | 59.3 | 95.0 | First, compared with when only the generated counter-memory is presented as evidence (single-source), both LLMs demonstrate significantly higher memorization ratios when parametric memory is also provided as evidence (multi-source), especially in the case of GPT-4. In other words, when faced with conflicting evidence, LLMs often prefer the evidence consistent with their internal belief (parametric memory) over the conflicting evidence (counter-memory), demonstrating a strong confirmation bias (Nickerson, 1998). Such properties could hinder the unbiased use of external evidence in tool-augmented LLMs. Second, for questions regarding more popular entities, LLMs demonstrate a stronger confirmation bias. In particular, GPT-4 shows an 80% memorization ratio for the most popular questions. This may suggest that LLMs form a stronger belief in facts concerning more popular entities, possibly because they have seen these facts and entities more often during pre-training, which leads to a stronger confirmation bias. LLMs demonstrate a noticeable sensitivity to the evidence order. 
Previous work has shown a tendency in tool-augmented language models to select evidence presented in the top place (BehnamGhader et al., 2022) and the order sensitivity in LLMs (Lu et al., 2022). To demystify the impact of the evidence-presenting order in LLMs, we respectively put parametric memory and counter-memory as the first evidence in multi-source settings. As a reference, the results of first evidence randomly selected from the two are also reported in Table 5. In line with the popularity experiment, we use the same LLMs. We observe that, with the exception of GPT-4, other models demonstrated pronounced order sensitivity, with fluctuations exceeding 5%. It’s especially concerning that the variations in PaLM2 and Llama2-7B surpassed 30%. When evidence is presented first, ChatGPT tends to favor it; however, PaLM2 and Llama2-7B lean towards later pieces of evidence. Such order sensitivity for evidence in the context may not be a desirable property for tool-augmented LLMs. By default, the order of evidence is randomized in other experiments in this section. LLMs follow the herd and choose the side with more evidence. In addition to LLM-generated evidence (parametric memory and counter-memory), we also extend to human-crafted ones such as Wikipedia. These highly credible and accessible human-written texts are likely to be retrieved as evidence by real-world search engine tools. We adopt Wikipedia passages from POPQA and manually annotated facts from STRATEGYQA with post-processing to ensure that the ground truth answer can indeed be deduced. Please refer to Appendix B.4 for more processing details. To balance the quantity of evidence supporting memory answer and counter-answer, we create additional evidence through the method mentioned in Section 3.3 with the goal of achieving a Table 7: Answer distribution of ChatGPT and Llama2-7B under different quantities of relevant (i.e., parametric memory and counter-memory) and irrelevant evidence (Irr.). In this setting, LLMs may generate irrelevant answers (Irr-Ans.). “w/ Relevant Evidence” means that we provide both a parametric memory and a counter-memory as evidence. Under the setting of ‘w/o relevant evidence’, the notation “-” indicates no counter-answers, consistent with the premise of lacking counter-memory. | Models | Irr.(#) | w/o Relevant Evidence | w/ Relevant Evidence | |----------|---------|------------------------|----------------------| | | | Mem-Ans. | Ctr-Ans. | Irr-Ans. | Uncertain | Mem-Ans. | Ctr-Ans. | Irr-Ans. | Uncertain | | ChatGPT | 1 | 9.8 | - | 18.2 | 72.0 | 46.7 | 49.7 | 0.9 | 2.7 | | | 2 | 6.5 | - | 11.7 | 81.8 | 46.0 | 50.9 | 1.2 | 2.0 | | | 3 | 5.9 | - | 10.6 | 83.5 | 45.6 | 48.8 | 1.3 | 4.3 | | Llama2-7B| 1 | 6.3 | - | 92.4 | 1.4 | 63.5 | 33.6 | 2.6 | 0.3 | | | 2 | 5.6 | - | 93.4 | 1.0 | 58.8 | 32.7 | 8.1 | 0.4 | | | 3 | 5.0 | - | 94.3 | 0.7 | 58.9 | 27.8 | 13.1 | 0.2 | balanced 2:2 split at most between parametric memory and counter-memory evidence. Table 6 shows the memorization ratio under different proportions between parametric memory-aligned evidence and counter-memory. We have three main observations: 1) LLMs generally provide answers backed by the majority of evidence. The higher the proportion of evidence supporting a particular answer, the more likely LLMs will return that answer. 2) The confirmation bias becomes increasingly obvious with a rise in the quantity of parametric memory evidence, despite maintaining a consistent relative proportion (e.g., \( \frac{1}{2} \) vs. \( \frac{2}{4} \)). 
3) Compared to other LLMs, GPT-4 and Vicuna-33B are less receptive to counter-memory across all proportions of evidence. In particular, even when more pieces of evidence support the counter-answer (ratio \( \frac{1}{3} \)), these two models still noticeably cling to their parametric memory. These observations once again signify the confirmation bias in LLMs. LLMs can be distracted by irrelevant evidence. We further experiment on a more complicated knowledge conflict scenario. We are interested in this question: Tools such as search engines may return irrelevant evidence — what if irrelevant evidence is presented to LLMs? When irrelevant evidence is presented, LLMs are expected to 1) abstain if no evidence clearly supports any answer and 2) ignore irrelevant evidence and answer based on the relevant ones. To set up, we regard top-ranked irrelevant passages retrieved by Sentence-BERT embeddings\(^1\) (Reimers & Gurevych, 2019) as irrelevant evidence (i.e., sentences unrelated to the entities shown in the question). The experimental results on POPQA are presented in Table 7. We find that: 1) With only irrelevant evidence provided, LLMs can be distracted by it, delivering irrelevant answers. This issue is particularly concerning in Llama2-7B. Meanwhile, as more irrelevant evidence is introduced, LLMs become less likely to answer based on their parametric memory. 2) With both relevant and irrelevant evidence provided, LLMs can filter out the irrelevant ones to a certain extent. This observation aligns with the study by Shi et al. (2023a) on how LLMs might be distracted by irrelevant context in mathematics problems. Furthermore, we find that as the quantity of irrelevant evidence increases, such an ability diminishes, especially in the case of Llama2-7B. 5 CONCLUSION In this work, we propose a systematic framework to elicit the parametric memory of LLMs, construct counterpart counter-memory, and design a series of checks to ensure their quality. With the parametric memory and counter-memory as external evidence, we simulate comprehensive scenarios as controlled experiments to unravel the behaviors of LLMs in knowledge conflicts. We find that LLMs are highly receptive to counter-memory when it is the only evidence presented in a coherent way. However, LLMs also demonstrate a strong confirmation bias toward parametric memory when both supportive and contradictory evidence to their parametric memory is present. In addition, we show that LLMs’ evidence preference is influenced by the popularity, order, and quantity of evidence, none of which may be a desired property for tool-augmented LLMs. Finally, the effectiveness of our framework also demonstrates that LLMs can generate convincing misinformation, which poses potential ethical risks. We hope our work provides a solid evaluation testbed and useful insights for understanding, improving, and deploying tool-augmented LLMs in the future. --- \(^1\) [https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) ETHICS STATEMENT Our study highlights a serious concern: LLMs can be instructed to make up coherent and convincing fake information. This underscores the potential misuse of these models if left unchecked. As researchers, it is our duty to address this pressing issue. The risks associated with the misuse of LLMs demand robust safeguards and prevention measures, requiring concerted effort from the wider research community.
To this end, we commit to careful distribution of the data generated through our research, ensuring it serves strictly for research purposes. Our goal is to mitigate the risks while maximizing the benefits offered by LLMs. REPRODUCIBILITY STATEMENT Our experiments utilize three closed-sourced LLMs accessed via API, as well as five open-sourced LLMs. We have increased reproducibility by including the prompts used in our experiments in Appendix C. As for the versions of the closed-sourced LLMs, we used ChatGPT-0301, GPT-4-0314, and Chat-Bison-001 of PaLM2 in all our tests. ACKNOWLEDGEMENTS The authors would like to thank colleagues from the OSU NLP group for their constructive feedback and manual evaluations. The authors would also like to thank Siyu Yuan, Wei Shi, and Jiayi Fu from Fudan University as well as the anonymous reviewers for their valuable comments. This research was sponsored in part by Cisco and YS’s startup funds. REFERENCES Alibaba. Qwen, 2023. URL https://github.com/QwenLM/Qwen-7B/blob/main/tech_memo.md Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. AutoGPT. Autogpt, 2023. URL https://github.com/Significant-Gravitas/AutoGPT Parishad BehnamGhader, Santiago Miret, and Siva Reddy. Can retriever-augmented language models reason? the blame game between the retriever and the language model. arXiv preprint arXiv:2212.09146, 2022. URL https://arxiv.org/abs/2212.09146 Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In Proceedings of ICML, 2022. URL https://proceedings.mlr.press/v162/borgeaud22a.html Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Proceedings of NeurIPS, 2020. URL https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In Proceedings of USENIX Security Symposium, 2021. URL https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In Proceedings of ICLR, 2023. URL https://openreview.net/forum?id=TatRHT_1cK Hung-Ting Chen, Michael Zhang, and Eunsol Choi. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of EMNLP, pp. 2292–2307, 2022. URL https://aclanthology.org/2022.emnlp-main.146.
fyCPspuM5L
The author mentioned that PowerGraph has potential applications in various domains, such as chemistry and biology. In what ways do you see this dataset being applied in these other fields, and what advantages does it offer for these applications?
POWERGRAPH: A POWER GRID BENCHMARK DATASET FOR GRAPH NEURAL NETWORKS Anonymous authors Paper under double-blind review ABSTRACT Public Graph Neural Network (GNN) benchmark datasets facilitate the use of GNNs and enhance their applicability to diverse disciplines. The community currently lacks public datasets of electrical power grids for GNN applications. Power grids are complex engineered networks that are naturally amenable to graph representations; GNNs therefore have the potential to capture complex power grid phenomena better than alternative machine learning techniques. To this aim, we develop a graph dataset for cascading failure events, which are the major cause of blackouts in electric power grids. Historical blackout datasets are scarce and incomplete. The assessment of vulnerability and the identification of critical components are usually conducted via computationally expensive offline simulations of cascading failures. Instead, we propose the use of machine learning models for the online detection of cascading failures, leveraging the knowledge of the system state at the onset of the cascade. We develop PowerGraph, a graph dataset modeling cascading failures in power grids, designed for two purposes, namely, i) training GNN models for different graph-level tasks including multi-class classification, binary classification, and regression, and ii) explaining GNN models. The dataset, generated via a physics-based cascading failure model, ensures the generality of the operating and environmental conditions by spanning diverse failure scenarios. In addition, we foster the use of the dataset to benchmark GNN explainability methods by assigning ground-truth edge-level explanations. PowerGraph supports the development of better GNN models for graph-level tasks and explainability, critical in many domains ranging from chemistry to biology, where the systems and processes can be described as graphs. The dataset is available at https://figshare.com/articles/dataset/PowerGraph/22820534 and the code at https://anonymous.4open.science/r/PowerGraph/. 1 INTRODUCTION The lack of public Graph Neural Network (GNN) datasets for power grid applications has motivated the development of a new graph dataset. Power grid stability is crucial to modern society, and, therefore, power grids are designed to be robust under failures of different natures. Under particular conditions, however, the failure of critical components can trigger cascading outages. In the worst case, cascading failures spread, leading to a full blackout of the power grid [Andersson et al. (2005); Haes Alhelou et al. (2019)]. A complete understanding of complex events such as cascading failures is therefore of utmost importance. Such events are rare and historical data is scarce; therefore, we must rely on simulating cascading failures via computer models. The established traditional approach for cascading failure analysis is a quasi-steady state model, such as the OPA model [Carreras et al. (2002)], the Manchester model [Nedic et al. (2006)], and the Cascades model [Gjorgiev et al. (2022)]. These models assess how the power grid responds after an outage is introduced in the grid. In fact, they simulate the complex behavior of the systemic responses and how a chain of successive failures (cascade) propagates in the grid.
Since such tools are computationally intensive, they cannot be used by power grid operators for online detection of cascading failure nor for probabilistic risk analysis employing sequential Monte Carlo. The shortage of historical blackout data and the high computational cost of current methods to simulate cascading failures in power grids highlight the need for machine learning models that can detect cascading failures in almost real-time. Power grid operators, specifically transmission system operators (TSO), will greatly benefit from an online tool able to estimate the potential of cascading failures under given operating conditions of the power grid. The research community has presented new methods that employ machine learning algorithms for the online prediction of cascading failures. The proposed methods often do not generalize for diverse sets of failures [Abedi et al., 2022; Aliyan et al., 2020]. They are trained with datasets created with cascading failure models that often rely on the direct current (DC) power flow approximation [Liu et al., 2020], less accurate than the alternate-current (AC) power flow. In addition to these limitations, the authors are not aware of publicly available datasets on the subject. Within the realm of machine learning algorithms, GNN are convenient and powerful machine learning algorithms to model power grid phenomena, since graphs allow an intuitive representation of power grids. In [Liao et al., 2021], the authors introduce how GNN have been employed for various applications in the field of power systems. Our paper focuses on fault scenario application, but we plan to extend it to power flow calculation in the future. On this topic, the authors of [Yamiv et al., 2023] provide a review of GNN for power flow models in the distribution systems. The work in [Varbella et al., 2023] shows that a GNN outperforms a feed-forward neural network in predicting cascading failures in power grids. To produce a large and complete dataset, we use Cascades [Gjorgiev et al., 2022], an alternate-current (AC) physics-based cascading failure model. The model simulates the evolution of the triggering failures yielding the final demand not served (DNS) to the customers. We produce a power grid GNN dataset comprising a large set of diverse power grid states. The power grid state represents the pre-outage operating condition, which is linked to the initial triggering outage (one or more failed elements), referred to as the outage list. Each power grid state is represented as a graph, to which we assign a graph-level label according to the results of the physics-based model. The dataset is generated to suit different graph-level tasks, including multi-class classification, binary classification, and regression. The presented graph property prediction dataset fills a gap according to the OGB taxonomy for graph dataset [Hu et al., 2020a; 2021]. Graph datasets are classified according to their task, domain, and scale. The task is at the node-, link-, or graph- level; the scale is small, medium, or large; and the domain is nature, society, or information. Our dataset comprises a collection of power grid datasets, which are designed for graph-level tasks, and their size ranges from small to medium [Fretas et al., 2022]. Moreover, all the datasets in PowerGraph have the same number of features per node, and therefore, they can be utilized as one combined dataset to train GNN models. 
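Because all per-grid datasets share the same node and edge feature dimensions, combining them is straightforward in PyTorch Geometric. The following is a minimal sketch of this idea only, under the assumption that each grid's graphs are available as Python lists of `torch_geometric.data.Data` objects; the helper and placeholder graphs below are ours and are not the dataset's actual loading code.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

def toy_graph(num_nodes: int, num_edges: int) -> Data:
    """Placeholder graph with the PowerGraph feature dimensions (3 per bus, 4 per branch)."""
    return Data(
        x=torch.randn(num_nodes, 3),                      # bus features
        edge_index=torch.randint(0, num_nodes, (2, num_edges)),
        edge_attr=torch.randn(num_edges, 4),              # branch features
        y=torch.randint(0, 2, (1,)),                      # e.g. binary stable/unstable label
    )

# Stand-ins for the per-grid collections; in practice these would be the released
# IEEE24 / IEEE39 / UK / IEEE118 graph lists.
ieee24 = [toy_graph(24, 38) for _ in range(10)]
uk = [toy_graph(29, 99) for _ in range(10)]

# Since the feature dimensions match, the lists can simply be concatenated and batched together.
loader = DataLoader(ieee24 + uk, batch_size=8, shuffle=True)
for batch in loader:
    print(batch.x.shape, batch.edge_attr.shape, batch.y.shape)
```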
Table 1 reports the total number of graphs per power grid, the number of buses and branches in the grid, the number of loading conditions, and the number of outage lists simulated. The dataset fits the society domain, where no public GNN graph property prediction datasets are available [Hu et al., 2020a], see Appendix A.1.

Table 1: Parameters of the AC physics-based cascading failure model for the selected four test power grids. A bus is defined as a node where a line or several lines are connected and may also include loads and generators in a power system. Transmission lines and transformers are defined as branches.

| Test system | # Bus | # Branch | # Loading conditions | # Outage lists | # Graphs N |
|-------------|-------|----------|----------------------|---------------|------------|
| IEEE24 | 24 | 38 | 300 | 43 | 12900 |
| UK | 29 | 99 | 300 | 132 | 39600 |
| IEEE39 | 39 | 46 | 300 | 55 | 16500 |
| IEEE118 | 118 | 186 | 300 | 250 | 75000 |

Other relevant GNN datasets for graph property prediction are the TU collection [Morris et al., 2020] and the MoleculeNET [Wu et al., 2018] dataset. Their application is natural science, particularly molecular graphs, i.e., molecules are represented as graphs to predict certain chemical properties. Publicly available power grid datasets such as the Electricity Grid Simulated (EGS) datasets [Dua & Graff, 2017], the PSML [Zheng et al., 2021], and the Simbench dataset [Meinecke et al., 2020] are not targeted to machine learning on graphs. In addition, both the EGS and PSML provide data for very small power grids, with 4 and 13 nodes respectively. Instead, Simbench focuses only on power system analysis in the German distribution and transmission grid, and the dataset is not designed for machine learning on graphs. In [Nauck et al., 2022], the authors present new datasets of dynamic stability of synthetic power grids. They found that their GNN models, which primarily use node regression, can predict highly non-linear targets from topological information. On the other hand, PowerGraph, which uses graph-level tasks, does not address dynamic stability and relies on established real-world-based power grid models to predict the development of cascading failures. Overall, the dataset we provide fills a gap in the domain of GNN datasets for graph-level tasks [Hu et al., 2020a] and is the only publicly available GNN dataset for power grids. Besides benchmarking GNN models, the dataset is intended to be used for explainability methods. Therefore, we assign ground-truth edge explanations using the insights provided by the physics-based cascading failure model. As explanations, we consider the branches that have failed after the initial trigger, i.e., the cascading stage. In the field of explainability for GNN, there is to the best of our knowledge no existing real-world dataset with reliable ground-truth explanations [Agarwal et al., 2023]. There have been recent attempts to create a synthetic graph data generator producing a variety of benchmark datasets that mimic real-world data and are accompanied by ground-truth explanations [Agarwal et al., 2023], as well as to provide atom-wise and bond-wise feature attribution for chemical datasets [Hruska et al., 2022; Jiménez-Luna et al., 2022]. However, none of these attempts provides real-world data with empirical explanations. Here, we propose a real-world dataset for GNN graph-level tasks that has clear ground-truth explanations obtained from physics-based simulations.
This work provides a large-scale graph dataset to enable the prediction of cascading failures in electric power grids. The PowerGraph dataset comprises the IEEE24 Engineering (b), IEEE39 Engineering (c), IEEE118 Engineering (a) and UK transmission system Nationalgrideso. These test power systems have been specifically selected due to their representation of real-world-based power grids, encompassing a diverse range of scales, topologies, and operational characteristics. Moreover, they offer comprehensive data with all the necessary information required for conducting cascading failure analysis. With PowerGraph, we make GNN more accessible for critical infrastructures such as power grids and facilitate the online detection of cascading failures. Our contributions are the following: • We provide a data-driven method for the online detection of severe cascading failure events in power grids. • We make the dataset public in a viable format (PyTorch Geometric), allowing the GNN community to test architectures for graph-level applications. • The dataset includes several graph-level tasks: binary classification, multi-class classification, and regression. • We provide explanatory edge masks, allowing the improvement of GNN explainability methods for graph-level applications. The rest of the paper is organized as follows: Section 2 describes the physics-based model used to simulate cascading failure scenarios; Section 3 outlines the structure of the graph datasets; Section 4 reports the benchmark experiments of the different datasets; Section 5 describes the method used to benchmark explainability methods; and Section 6 concludes the article with a final discussion. 2 PHYSICS-BASED MODEL OF CASCADING FAILURES We employ the established Cascades model [Gjorgiev et al., 2022; Gjorgiev et al.] for cascading failure simulations to produce the GNN datasets. Indeed, its application to the Western Electricity Coordinating Council (WECC) power grid demonstrates that Cascades can generate a distribution of blackouts that is consistent with the historical blackout data [Li et al., 2018]. Cascades is a steady-state model with the objective to simulate the power grid response under unplanned failures in the grid. For that purpose, the model simulates the power system’s automatic and manual responses after such failures. Initially, all components are in service and there are no overloads in the grid. The system is in a steady-state operation with the demand supplied by the available generators, which produce power according to AC-optimal power flow (OPF) conditions [Bouchekara, 2014]. The simulation begins with the introduction of single or multiple initial failures. Then, Cascades simulates the post-outage evolution of the power grid, i.e., identifies islands, performs frequency control, under-frequency load shedding, under-voltage load shedding, AC power flows, checks for overloads, and disconnects overloaded components. The model returns two main results: the demand not served (DNS) in MW and the number of branches tripped after the initial triggering failure. The simulation is performed for a set of power demands sampled from a yearly load curve. For each season of the year, an equal number of loading conditions are randomly sampled. We use a Monte-Carlo simulation to probabilistically generate outages of transmission branches (lines and transformers). We define the number of loading conditions and the size of the outage list. 
Therefore, we are able to simulate a large number of scenarios and thus create large datasets. Each scenario generated is a power grid state and, therefore, becomes an instance of the dataset. For each combination of loading condition and element in the outage list, we simulate the cascading failure, identify the terminal state of the power grid, quantify the demand not served, and list the tripped elements. Figure 1 shows the structure of the Cascades model [Gjorgiev & Sansavini (2022)].

Figure 1: Workflow of the Cascades model [Gjorgiev & Sansavini (2022)] used to simulate cascading failures in power grids. Separate runs of Cascades are performed for the different test power grids, namely IEEE24, IEEE39, UK, and IEEE118.

3 POWERGRAPH BENCHMARK FOR GRAPH-LEVEL PREDICTIONS AND EXPLAINABILITY The PowerGraph dataset is obtained by processing the results of the Cascades model. Because we work with graph-level tasks, the dataset is a collection of $N$ attributed graphs $\mathcal{G} = \{G_1, G_2, ..., G_N\}$. Each input graph reflects a unique pre-outage operating condition of the system and one set of single/multiple outages. Therefore, the total number of graphs $N$ per power grid equals $n_{load\ cond} \times n_{outage\ lists}$. Finally, each graph is assigned an output label corresponding to the chosen task. An attributed graph is defined as $G = (\mathcal{V}, \mathcal{E}, \mathbf{V}, \mathbf{E})$, where $\mathcal{V}$ is the set of nodes (buses) and $\mathcal{E}$ is the set of edges (branches), $\mathbf{V} \in \mathbb{R}^{|\mathcal{V}| \times t}$ is the node feature matrix, with $|\mathcal{V}|$ nodes and $t$ features per node, and $\mathbf{E} \in \mathbb{R}^{|\mathcal{E}| \times s}$ is the edge feature matrix, with $|\mathcal{E}|$ edges and $s$ features per edge. Finally, the graph connectivity information is encoded in COO format [Fey & Lenssen (2019)]. We assign three bus-level features and four branch-level features. Each feature quantity is normalized using mean normalization. The input features are:

**Bus:**
- Net active power at bus $i$, $P_{i,net} = P_{i,gen} - P_{i,load}$, $P \in \mathbb{R}^{n_{bus} \times 1}$, where $P_{i,gen}$ and $P_{i,load}$ are the active generation and load, respectively.
- Net apparent power at bus $i$, $S_{i,net} = S_{i,gen} - S_{i,load}$, $S \in \mathbb{R}^{n_{bus} \times 1}$, where $S_{i,gen}$ and $S_{i,load}$ are the apparent generation and load, respectively.
- Voltage magnitude at bus $i$, $V_i \in \mathbb{R}^{n_{bus} \times 1}$, where $n_{bus}$ is the number of buses in the power grid.

**Branch:** Active power flow $P_{i,j}$, Reactive power flow $Q_{i,j}$, Line reactance $X_{i,j}$, Line rating $lr_{i,j}$.

Figure 2 displays an instance of the PowerGraph dataset. Each graph represents a state of the power grid associated with a loading condition and an outage (single or multiple failures). Since each outage is associated with disconnected branches, we remove the respective branches from the adjacency matrix and from the edge feature matrix. Therefore, each instance of the dataset is a graph with a different topology. The total number of instances is reported in Table 1. For each initial power grid state, we have knowledge of the post-outage evolution of the system, i.e., the demand not served (DNS) and the number of tripped lines. We label each case in which branches trip after the initial outage as a cascading failure.
With these two results, we can assign an output label to each graph for different models: Binary classification - we assign each instance to two classes:
- DNS=0, initial state results in a stable state, label 0
- DNS>0, initial state results in an unstable state, label 1
Multi-class classification - we assign each instance to four classes:
- DNS>0, cascading failure of components besides the first trigger, Category A
- DNS>0, no cascading failure of components besides the first trigger, Category B
- DNS=0, cascading failure of components besides the first trigger, Category C
- DNS=0, no cascading failure of components besides the first trigger, Category D
Regression - we assign each instance the DNS in MW.

Table 2: Multi-class classification of datasets. c.f. stands for *cascading failure* and describes a state resulting in cascading failure of components. DNS denotes demand not served.

| Category A | Category B | Category C | Category D |
|------------|------------|------------|------------|
| DNS > 0 MW | DNS > 0 MW | DNS = 0 MW | DNS = 0 MW |
| c.f. ✓ | c.f. × | c.f. ✓ | c.f. × |

Table 3: Results of categorization in percentage.

| Power grid | Category A | Category B | Category C | Category D |
|------------|------------|------------|------------|------------|
| IEEE39 | 2.18% | 3.48% | 1.46% | 92.88% |
| IEEE118 | 0.07% | 5.84% | 2.01% | 92.08% |
| IEEE24 | 33.90% | 4.88% | 0.16% | 61.06% |
| UK | 4.06% | 0% | 8.02% | 87.92% |

The choice among binary classification, multi-class classification, or regression depends on the use of the GNN model trained with the PowerGraph dataset. The binary classification model serves as an early warning system, i.e., detects initial states of the power grid that are critical. The multi-class classification model allows us to distinguish different scenarios. Indeed, a transmission system operator could benefit from knowing when a cascading failure does not necessarily cause demand not served and vice-versa. Finally, with the regression model, we can directly access the final demand not served associated with particular pre-outage states of the system. In this case, the GNN model becomes a surrogate of the physics-based model useful both as an early warning system and to perform security evaluation with low computational cost. **Explainability mask** We assign ground-truth explanations as follows: when a system state undergoes a cascading failure, the cascading edges are considered to be explanations for the observed demand not served. Therefore, for the Category A instances, we record the branches that fail during the development of the cascading event. We set the explainability mask as a Boolean vector \( M \in \mathbb{R}^{|\mathcal{E}| \times 1} \), whose elements are equal to 1 for the edges belonging to the cascading stage and 0, otherwise (see Figure 2).

Figure 2: Structure of one instance of the GNN dataset for an exemplary power grid. The same structure is kept for all the power grids in PowerGraph, IEEE24, IEEE39, UK, and IEEE118. We highlight the initial outage in red; the line is removed both from the graph connectivity matrix and from the edge feature matrix. The cascading edges are highlighted with the dotted line and encoded in the \( M \) boolean vector (0 - the edge has not tripped during cascading development, 1 - otherwise).

4 BENCHMARKING GRAPH CLASSIFICATION AND REGRESSION MODELS In this section, we outline the method used to benchmark classification and regression models.
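Before detailing the benchmark setup, the following short sketch makes the labelling scheme and explainability mask defined above concrete: it derives the binary, multi-class, and regression targets and the ground-truth edge mask from the two Cascades outputs (demand not served and the list of tripped branches). This is an illustrative sketch only; the function and variable names are ours, not the dataset's actual field names.

```python
import torch

def graph_targets(dns_mw: float, tripped_branches: list[int], num_edges: int):
    """Derive the three task labels and the explanation mask for one grid state.

    dns_mw           -- demand not served returned by the cascading-failure simulation
    tripped_branches -- indices of branches that failed after the initial trigger
    num_edges        -- number of branches remaining in the graph (initial outage removed)
    """
    cascading = len(tripped_branches) > 0

    binary_label = int(dns_mw > 0)                   # 0: stable, 1: unstable
    if dns_mw > 0:
        multiclass_label = 0 if cascading else 1     # Category A / B
    else:
        multiclass_label = 2 if cascading else 3     # Category C / D
    regression_target = dns_mw                       # demand not served in MW

    # Boolean edge mask: True for branches that tripped during the cascade, False otherwise.
    edge_mask = torch.zeros(num_edges, dtype=torch.bool)
    edge_mask[torch.tensor(tripped_branches, dtype=torch.long)] = True
    return binary_label, multiclass_label, regression_target, edge_mask
```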
Experimental setting and evaluation metrics For each power grid dataset, we utilize baseline GNN architectures as they are common in the graph xAI community. Specifically, we use GCN-Conv [Kipf & Welling (2016)], GATConv [Veličković et al. (2018)], and GINEConv [Hu et al. (2020b)] to demonstrate that the PowerGraph datasets can be used to benchmark GNN and methods used to explain them. Furthermore, we experimented with the state-of-the-art graph transformer convolutional layers [Shi et al. (2020)] since they are the backbones of the most recent Graph Transformer models: GraphGPS [Rampášek et al. (2022)], Transformer-MI [Luo et al. (2022)], TokenGT [Kim et al. (2022)]. Finally, we resort to all of the aforementioned models because they account for the edge features, which are highly relevant in the case of power grids. We tune the number of MPL ∈ {1, 2, 3} and the hidden dimensionality ∈ {8, 16, 32}. Adam optimizer is used with the initial learning rate of $10^{-3}$. Each model is trained for 200 epochs with learning rate adjusted in the learning process using a scheduler, which automatically reduces the learning rate if a metric has stopped improving. We split train/validation/test with 80/10/10% for all datasets and choose a batch size of 128. We present three graph-level models, namely, binary/multi-class classification, and regression. For classification models, we consider balanced accuracy [Brodersen et al. (2010)] as the reference evaluation metric. Indeed, balanced accuracy has been designed as a metric for classification tasks where a strong class imbalance is observed (see Table 3). It allows prioritizing all the classes equally, in contrast to the F1 or F2 score, and it gives interpretable results for multiclass classification, in contrast to ROC-AUC [Saito & Rehmsmeier (2015)]. Indeed, a strong class imbalance is observed. For regression models, we use mean squared error as metric. Observations We report the best model performance for each power grid and MPL in Tables 4, 5, and 6. For the different MPL, we only show the set of hyper-parameters yielding the best performance, and the best model per power grid is highlighted in bold. The GNN architecture comprises 1) a number of MPLs, each followed by PReLU [He et al. (2015)] activation function, 2) a global pooling operator to obtain graph-level embedding from node embeddings, and 3) one fully connected layer. For the classification model, we do not observe relevant differences among the mean, max, and sum global pooling operators. The classification results are obtained with max global pooling. The regression results are obtained by concatenating max and sum global poolings. Table 4: Binary classification models results on the test set averaged over five random seeds. Balanced accuracy is used as reference metric. 
| Power grid | MPL type | No MPL | Hidden dimension | Test Accuracy | Test Balanced Accuracy | |------------|----------|--------|------------------|---------------|------------------------| | IEEE24 | GCN | 2 | 32 | 0.8667 ± 0.0049 | 0.8769 ± 0.0056 | | | GINe | 3 | 32 | 0.9798 ± 0.0046 | 0.9800 ± 0.0035 | | | GAT | 3 | 32 | 0.9008 ± 0.0052 | 0.9067 ± 0.0034 | | | Transformer | 3 | 16 | 0.9907 ± 0.0040 | 0.9910 ± 0.0037 | | IEEE39 | GCN | 3 | 32 | 0.9733 ± 0.0012 | 0.8113 ± 0.0011 | | | GINe | 2 | 32 | 0.9939 ± 0.0020 | 0.9550 ± 0.0041 | | | GAT | 3 | 32 | 0.9697 ± 0.0023 | 0.7865 ± 0.0061 | | | Transformer | 3 | 16 | 0.9952 ± 0.0015 | 0.961 ± 0.016 | | UK | GCN | 3 | 32 | 0.9657 ± 0.0027 | 0.7176 ± 0.0023 | | | GINe | 2 | 32 | 0.9975 ± 0.0018 | 0.9820 ± 0.0010 | | | GAT | 3 | 8 | 0.9889 ± 0.0005 | 0.9175 ± 0.0012 | | | Transformer | 3 | 16 | 0.9960 ± 0.0016 | 0.9820 ± 0.0045 | | IEEE118 | GCN | 3 | 32 | 0.9917 ± 0.0015 | 0.9364 ± 0.0032 | | | GINe | 3 | 8 | 0.9992 ± 0.0046 | 0.9921 ± 0.0035 | | | GAT | 3 | 32 | 0.9880 ± 0.0012 | 0.9427 ± 0.0005 | | | Transformer | 3 | 32 | 0.9992 ± 0.0005 | 0.9947 ± 0.0041 | Discussion Most GNN models achieve high performance on the power grids of PowerGraph. We compare GCN, GAT, GINe, and Transformer. Of all MPL considered, only GCN does not take edge features into account; as a result its performance is low in most cases. Transformer achieves the state-of-the-art on all power grids for the binary and multi-class models. In the regression model, Transformer and GINe are the best-performing models. Overall, the model for binary Table 5: Multi-class classification models results on the test set averaged over five random seeds. Balanced accuracy is used as reference metric. | Power grid | MPL type | No MPL | Hidden dimension | Test Accuracy | Test Balanced Accuracy | |------------|----------|--------|-----------------|---------------|------------------------| | IEEE24 | GCN | 2 | 32 | 0.8465 ± 0.0023 | 0.6846 ± 0.0009 | | | GINe | 2 | 32 | 0.9798 ± 0.0019 | 0.9426 ± 0.0028 | | | GAT | 3 | 32 | 0.9054 ± 0.0020 | 0.8375 ± 0.0009 | | | Transformer | 3 | 32 | **0.9829 ± 0.0012** | **0.9894 ± 0.0016** | | IEEE39 | GCN | 2 | 8 | 0.9242 ± 0.0019 | 0.4071 ± 0.0012 | | | GINe | 3 | 16 | 0.9939 ± 0.0015 | 0.9693 ± 0.0019 | | | GAT | 2 | 16 | 0.9497 ± 0.0022 | 0.5577 ± 0.0027 | | | Transformer | 3 | 32 | **0.9550 ± 0.0009** | **0.9742 ± 0.0016** | | UK | GCN | 3 | 32 | 0.9068 ± 0.0023 | 0.4615 ± 0.0038 | | | GINe | 2 | 32 | 0.9798 ± 0.0020 | 0.9347 ± 0.0017 | | | GAT | 3 | 8 | 0.9563 ± 0.0009 | 0.7452 ± 0.0014 | | | Transformer | 3 | 8 | **0.9912 ± 0.0009** | **0.9798 ± 0.0013** | | IEEE118 | GCN | 3 | 8 | 0.9771 ± 0.0010 | 0.8303 ± 0.0016 | | | GINe | 3 | 32 | 0.9968 ± 0.0018 | 0.9586 ± 0.0010 | | | GAT | 3 | 16 | 0.9677 ± 0.0010 | 0.7392 ± 0.0011 | | | Transformer | 3 | 8 | **0.9992 ± 0.0013** | **0.9833 ± 0.0006** | Table 6: Regression models results on the test set averaged over five random seeds. MSE error is used as reference metric. 
| Power grid | MPL type | No MPL | Hidden dimension | MSE loss | |------------|----------|--------|-----------------|----------------| | IEEE24 | GCN | 1 | 32 | 2.80E-03 ± 5.69E-04 | | | GINe | 3 | 16 | 2.90E-03 ± 2.88E-04 | | | GAT | 2 | 16 | 2.90E-01 ± 5.00E-04 | | | Transformer | 3 | 8 | **2.70E-03 ± 3.16E-04** | | IEEE39 | GCN | 2 | 32 | 5.61E-04 ± 5.04E-05 | | | GINe | 3 | 32 | **5.04E-04 ± 5.04E-05** | | | GAT | 3 | 32 | 5.62E-04 ± 4.66E-05 | | | Transformer | 3 | 32 | 5.47E-04 ± 8.50E-05 | | UK | GCN | 3 | 32 | 7.07E-03 ± 6.45E-04 | | | GINe | 2 | 32 | 7.65E-03 ± 6.17E-04 | | | GAT | 3 | 32 | 7.60E-03 ± 6.12E-04 | | | Transformer | 3 | 16 | **7.00E-03 ± 5.10E-04** | | IEEE118 | GCN | 2 | 32 | 4.00E-06 ± 2.94E-07 | | | GINe | 2 | 32 | **3.00E-06 ± 3.51E-07** | | | GAT | 2 | 8 | 4.00E-06 ± 3.70E-07 | | | Transformer | 2 | 8 | 5.00E-06 ± 6.55E-07 | and classification models exhibit excellent results. However, the regression model, which is of importance in providing a prediction of the demand not served, does not achieve the desired level of performance. While the classification models showed consistent performance across various power grids, the regression models demonstrate lower MSE values for larger power grids. This observation can be attributed to the fact that larger power grids offer a greater diversity of scenarios, thus making it increasingly more difficult for a GNN model to identify and learn cascading failure patterns. Nevertheless, a regression model offers the most informative and comprehensive results since it predicts the exact magnitude of demand not served given a component failure and operating conditions. However, our results show that the regression models trained on the PowerGraph datasets do not provide the expected performance. Therefore, further advancements and innovations in GNN architectures are needed to achieve more robust and accurate regression results. Finally, we test the capability of GNN model to generalize to the systems not seen in training, i.e. inductive property of GNN [Vignac et al., (2020)]. We report the results in Appendix A.7. Models trained using the above approach, although representing real systems, are built with synthetic data from a cascading failure model. To render these models applicable to real-world systems further work is necessary. First, the cascading failure model that generates the data needs to be validated. and calibrated on the system of interest. Second, the GNN model should be further trained using real-world cascading failure events from the system of interest. 5 BENCHMARKING EXPLANATIONS ON THE GRAPH-CLASSIFICATION MODELS In this section, we outline the method used to benchmark explainability methods. We focus on explaining the power grids of Category A of the multi-class classification model. This choice is explained in Appendix A.2. Experimental setting and datasets For each dataset, we take the trained Transformer with 3 layers and 32 hidden units described in section 4. To benchmark explainability methods, we do not necessarily need the best GNN model. An appropriate filtering on the nature of the predictions (correct or mix) and the focus of the explanation (phenomenon or model focus) can circumvent smaller test accuracy. We adopt the same training parameters. We evaluate the posthoc explainability methods: Saliency Baldassarre & Azizpour (2019), Integrated Gradient Sundararajan et al. (2017), Occlusion Faber et al., GradCAM Selvaraju et al. (2016), GNNExplainer Ying et al. 
(2019) with and without node feature mask, PGExplainer Luo et al. (2020), PGM-Explainer Vu & Thai (2020), SubgraphX Yuan et al. (2021), and GraphCFE Ma et al. (2022). In Appendix A.3, we report more experimental details on the GNN performance and the explainability methods. The PowerGraph benchmark with explanations is used to test and compare existing explainability methods. The role of explainers is to identify the edges that are necessary for the graphs to be classified as Category A Amara et al. (2022). Then, the resulting edges are evaluated on how well they match the explanation masks, which represent the cascading edges. We compare the results obtained on the PowerGraph datasets with scores computed for the synthetic dataset BA-2Motifs Luo et al. (2020). See Appendix A.4 for more details. The comparison of PowerGraph to the BA-2Motifs dataset allows us to verify if our results align with state-of-the-art research on the explainability of GNNs. Human-based evaluation To evaluate the generated explanations, we use the balanced accuracy metric. It compares the generated edge mask to the ground-truth cascading edges and takes into account the class imbalance, i.e., cascading edges are a small fraction of the total edges. It measures how convincing the explanations are to humans. More details about this metric are given in Appendix A.5. We report the performance of 11 explainability methods on finding ground-truth explanations. All results are averaged on five random seeds. Accuracy scores are computed for the datasets in PowerGraph and the synthetic dataset BA-2Motifs. Model-centric evaluation Human evaluation is not always practical because it requires ground-truth explanations and can be very subjective, and therefore does not necessarily account for the model’s reasoning. Model-focused evaluation, however, measures the consistency of model predictions w.r.t. removing or keeping the explanatory graph entities. For a more objective evaluation, we therefore evaluate the faithfulness of the explanations using the fidelity+ metric. The fidelity+ metric measures how necessary the explanatory edges are to the GNN predictions. For PowerGraph, edges with high fidelity+ are the ones necessary for the graph to belong to Category A. We compare the PowerGraph results with BA-2Motifs results, using the fidelity+ metric $fid_{+acc}$. The $fid_{+acc}$ is computed as in the GraphFramEx framework Amara et al. (2022) and described in Appendix A.6. We utilize GraphFramEx to compare explainability methods: we choose the phenomenon focus and the masks to be soft on the edges. Explanations are weighted explanatory subgraphs, where edges are given importance based on their contribution to the true prediction in the multi-class setting. Figure 4 reports the fidelity+ scores for the power grid datasets and for the synthetic dataset BA-2Motifs.

Figure 3: Top balanced accuracy of the PowerGraph datasets and the synthetic dataset BA-2Motifs. The top balanced accuracy is computed on explanatory edge masks that contain the top k edges that contribute the most to the model predictions, with k being the number of edges in the corresponding ground-truth explanations.

Figure 4: Faithfulness of the PowerGraph datasets and the BA-2Motifs dataset measured with the $fid_{+acc}$ metric as defined in Equation [2] in Appendix A.6. We conducted experiments on five random seeds. In the plot, alongside each data point, we have included confidence intervals calculated based on the standard deviation.
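As a concrete illustration of the human-based evaluation described above, the sketch below computes the top-k balanced accuracy between a soft explanatory edge mask and the ground-truth cascading edges, with k set to the number of ground-truth edges. This is a minimal re-implementation for illustration only and is not the benchmark's actual evaluation code.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def top_k_balanced_accuracy(edge_importance: np.ndarray, gt_mask: np.ndarray) -> float:
    """Balanced accuracy of the top-k explanatory edges against the ground truth.

    edge_importance -- soft importance score per edge, as produced by an explainer
    gt_mask         -- boolean ground-truth mask (True for cascading edges)
    """
    k = int(gt_mask.sum())                        # keep as many edges as in the ground truth
    pred_mask = np.zeros_like(gt_mask, dtype=bool)
    if k > 0:
        top_k = np.argsort(edge_importance)[-k:]  # indices of the k most important edges
        pred_mask[top_k] = True
    return balanced_accuracy_score(gt_mask, pred_mask)

# Toy example: 10 edges, 3 of which tripped during the cascade.
gt = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0], dtype=bool)
scores = np.random.rand(10)
print(top_k_balanced_accuracy(scores, gt))
```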
Results Figure 3 shows that the best-balanced accuracies are obtained with the four methods, i.e., Saliency, Integrated Gradient, GradCAM, and Occlusion. Figure 4 also shows that these four methods have on average the highest fidelity+ on all datasets. Therefore, we conclude that they are the most appropriate methods to generate accurate and necessary explanations. Our observations on faithfulness are also consistent with previous results on the GraphFramEx benchmark [Amara et al., (2022)] that has already shown the superiority of gradient-based methods and Occlusion to return necessary explanations, i.e., the model predictions change when those explanatory entities are removed from the graph. However, in Figure 3 and Figure 4 no method globally outperforms the others for all datasets. For balanced accuracy, GradCAM and Occlusion are the best for IEEE24; Saliency for IEEE39; GradCAM for UK; and Integrated Gradient, Occlusion, GradCAM and SubgraphX for BA-2Motifs. On fidelity, GradCAM and Occlusion are the best for IEEE24; Saliency and Integrated Gradient for IEEE39; GradCAM for UK; and Integrated Gradient for BA-2Motifs. The choice of the optimal xAI method depends on the dataset. This is again consistent with the conclusions in Amara et al. (2022). Concerning the IEEE118 dataset, none of the methods is able to generate good explanations. The maximum top balanced accuracy is 0.55 and the maximum fidelity+ score is reached by GNNExplainer on edges and node features and is only 0.6. This performance is likely due to the complexity of the IEEE118. Being the largest power grid with 186 branches (see Table 1), the system contains complex interdependencies between the elements of the power grid during a cascading failure. As a consequence, node and edge-level features play a bigger role in explaining the GNN predictions. Therefore, we believe that an accurate model explanation will be obtained only with methods that provide node and link-level feature masks as well as edge masks. In addition, those methods could play a role in understanding the relevance of the input features to the GNN prediction, allowing to discard noisy features. 6 CONCLUSIONS To strengthen the use of GNN in the field of power systems, we present PowerGraph, a dataset for graph-level tasks and model explainability. The dataset is suited to test graph classification and regression models. The main focus of PowerGraph is the analysis of cascading failures in power grids. Furthermore, experts often require interpretability of the results. Therefore, we benchmark the dataset for a variety of GNN and explainability models. The GNN models show excellent performance, in particular for graph classification, on our new benchmark, while graph regression models should be further developed. Finally, PowerGraph is the first real-world dataset with ground-truth explanations for graph-level tasks in the field of explainable AI. It allows us to evaluate both the accuracy and faithfulness of explainability methods in a real-world scenario. PowerGraph provides consistent outcomes that align with previous research findings and reinforce the concept that there is no universally superior method for explainability. In future work, we aim to extend the PowerGraph with new datasets [Birchfield et al., (2017)] and include additional power grid analyses, including solutions to the power flow, the optimal power flow, and the unit commitment. REFERENCES Morteza Abedi, Mohammad Reza Aghamohammadi, and Mohammad Taghi Ameli. 
Svm based intelligent predictor for identifying critical lines with potential for cascading failures using pre-outage operating data. *International Journal of Electrical Power & Energy Systems*, 136:107608, 3 2022. ISSN 01420615. doi: 10.1016/j.ijepes.2021.107608. Chirag Agarwal, Marinka Zitnik, and Himabindu Lakkaraju. Probing gnn explainers: A rigorous theoretical and empirical analysis of gnn explanation methods. In *International Conference on Artificial Intelligence and Statistics*, pp. 8969–8996. PMLR, 2022. Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. Evaluating explainability for graph neural networks. *Scientific Data*, 10(1):144, 2023. Ehsan Aliyan, Mohammadreza Aghamohammadi, Mohsen Kia, Alireza Heidari, Miadreza Shafie-khah, and João P.S. Catalão. Decision tree analysis to identify harmful contingencies and estimate blackout indices for predicting system vulnerability. *Electric Power Systems Research*, 178, 1 2020. ISSN 03787796. doi: 10.1016/j.epsr.2019.106036. Kenza Amara, Rex Ying, Zitao Zhang, Zhihao Han, Yinan Shan, Ulrik Brandes, Sebastian Schemm, and Ce Zhang. Graphframed: Towards systematic evaluation of explainability methods for graph neural networks. *arXiv preprint arXiv:2206.09677*, 2022. G. Andersson, P. Donalek, R. Farmer, N. Hatziaargyriou, I. Kamwa, P. Kundur, N. Martins, J. Paserba, P. Pourbeik, J. Sanchez-Gasca, R. Schulz, A. Stankovic, C. Taylor, and V. Vittal. Causes of the 2003 major grid blackouts in north america europe, and recommended means to improve system dynamic performance. *IEEE Transactions on Power Systems*, 20(4):1922 – 1928, 2005. doi: 10.1109/TPWRS.2005.857942. Cited by: 995. et al. B. Bjorgiev. Cascades platform. 2019. URL https://ethz.ch/content/dam/thz/special-interest/mavt/energy-technology/rre-dam/documents/Research/Cascades%20Platform_draft1.pdf. Federico Baldassarre and Hossein Azizpour. Explainability techniques for graph convolutional networks. May 2019. Adam B. Birchfield, Ti Xu, Kathleen M. Gegner, Komal S. Shetye, and Thomas J. Overbye. Grid structural characteristics as validation criteria for synthetic networks. *IEEE Transactions on Power Systems*, 32(4): 3258–3265, 2017. doi: 10.1109/TPWRS.2016.2616385. H.R.E.H. Boucekara. Optimal power flow using black-hole-based optimization approach. *Applied Soft Computing*, 24:879–888, nov 2014. doi: 10.1016/j.asoc.2014.08.056. URL https://doi.org/10.1016%2Fj.asoc.2014.08.056. Kay Henning Brodersen, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M. Buhmann. The balanced accuracy and its posterior distribution. In *2010 20th International Conference on Pattern Recognition*, pp. 3121–3124, 2010. doi: 10.1109/ICPR.2010.764. B. A. Carreras, V. E. Lynch, I. Dobson, and D. E. Newman. Critical points and transitions in an electric power transmission model for cascading failure blackouts. *Chaos*, 12:985–994, 2002. ISSN 10541500. doi: 10.1063/1.1505810. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch: A scientific computing framework for luajit. *Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS)*, 24:237–245, 2011. Swiss National Supercomputing Centre (CSCS). Euler wiki. https://scicomp.ethz.ch/wiki/Euler [Accessed: April 26, 2023]. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml Texas A&M University Engineering. Ieee 118-bus system. https://electricgrids.engr.tamu.edu/electric-grid-test-cases/ieee-118-bus-system/ Texas A&M University Engineering. Ieee 24-bus system. 
https://electricgrids.engr.tamu.edu/electric-grid-test-cases/ieee-24-bus-system/ Texas A&M University Engineering. New england ieee 39-bus system. https://electricgrids.engr.tamu.edu/electric-grid-test-cases/new-england-ieee-39-bus-system/ Lukas Faber, Amin K Moghaddam, and Roger Wattenhofer. When comparing to ground truth is wrong: On evaluating GNN explanation methods.
5o9G4XF1LI
Why should we not define it as simply the distance from the optimal policy? For instance, we can say that the optimization pressure is epsilon if we obtain a policy $\hat{\pi}$ such that $J_R(\pi^\star) - J_R(\hat{\pi}) \leq \varepsilon$
Goodhart’s Law in Reinforcement Learning Jacek Karwowski¹∗ Oliver Hayman¹ Xingjian Bai¹ Klaus Kiendlhofer² Charlie Griffin¹ Joar Skalse¹,³ ¹ University of Oxford ² Independent ³ Future of Humanity Institute Abstract Implementing a reward function that perfectly captures a complex task in the real world is impractical. As a result, it is often appropriate to think of the reward function as a proxy for the true objective rather than as its definition. We study this phenomenon through the lens of Goodhart’s law, which predicts that increasing optimisation of an imperfect proxy beyond some critical point decreases performance on the true objective. First, we propose a way to quantify the magnitude of this effect and show empirically that optimising an imperfect proxy reward often leads to the behaviour predicted by Goodhart’s law for a wide range of environments and reward functions. We then provide a geometric explanation for why Goodhart’s law occurs in Markov decision processes. We use these theoretical insights to propose an optimal early stopping method that provably avoids the aforementioned pitfall and derive theoretical regret bounds for this method. Moreover, we derive a training method that maximises worst-case reward, for the setting where there is uncertainty about the true reward function. Finally, we evaluate our early stopping method experimentally. Our results support a foundation for a theoretically-principled study of reinforcement learning under reward misspecification. 1 Introduction To solve a problem using Reinforcement Learning (RL), it is necessary first to formalise that problem using a reward function (Sutton & Barto, 2018). However, due to the complexity of many real-world tasks, it is exceedingly difficult to directly specify a reward function that fully captures the task in the intended way. However, misspecified reward functions will often lead to undesirable behaviour (Paulus et al., 2018; Ilbarz et al., 2018; Knox et al., 2023; Pan et al., 2021). This makes designing good reward functions a major obstacle to using RL in practice, especially for safety-critical applications. An increasingly popular solution is to learn reward functions from mechanisms such as human or automated feedback (e.g. Christiano et al., 2017; Ng & Russell, 2000). However, this approach comes with its own set of challenges: the right data can be difficult to collect (e.g. Paulus et al., 2018), and it is often challenging to interpret it correctly (e.g. Mindermann & Armstrong, 2018; Skalse & Abate, 2023). Moreover, optimising a policy against a learned reward model effectively constitutes a distributional shift (Gao et al., 2023); i.e., even if a reward function is accurate under the training distribution, it may fail to induce desirable behaviour from the RL agent. Therefore in practice it is often more appropriate to think of the reward function as a proxy for the true objective rather than being the true objective. This means that we need a more principled understanding of what happens when a proxy reward is maximised, in order to know how we should expect RL systems to behave, and in order to design better algorithms. For example, we aim to answer questions such as: When is a proxy safe to maximise without constraint? What is the best way to maximise a misspecified proxy? What types of failure modes should we expect from a misspecified proxy? Currently, the field of RL largely lacks rigorous answers to these types of questions. 
In this paper, we study the effects of proxy misspecification through the lens of Goodhart’s law, an informal principle often stated as “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes” (Goodhart, 1984), or more simply: “when a measure becomes a target, it ceases to be a good measure”. For example, a students’ knowledge of a subject ∗Correspondence to jacek.karwowski@cs.ox.ac.uk may be correlated with their ability to pass exams on that subject by default. However, students who have sufficiently strong incentives to do well in exams may also include strategies such as cheating for increasing their test score without increasing their understanding. In the context of RL, we can think of a misspecified proxy reward as a measure correlated, but not robustly aligned, with the true objective across some distribution of policies. Goodhart’s law then says, informally, that we should expect optimisation of the proxy to initially lead to improvements on the true objective, up until a point where the correlation between the proxy reward and the true objective breaks down, after which further optimisation should lead to worse performance according to the true objective (Figure 1). In this paper, we present several novel contributions. First, we show that “Goodharting” occurs with high probability for a wide range of environments and pairs of true and proxy reward functions. Next, we provide a mechanistic explanation of why Goodhart’s law emerges in RL. We use this to derive two new policy optimisation methods and show that they provably avoid Goodharting. Finally, we evaluate these methods empirically. We thus contribute towards building a better understanding of the dynamics of optimising towards imperfect proxy reward functions, and show that these insights may be used to design new algorithms. 1.1 RELATED WORK Goodhart’s law was first introduced by Goodhart (1984), and has later been elaborated upon by works such as Manheim & Garrabrant (2019). Goodhart’s law has also previously been studied in the context of machine learning. In particular, Hennessy & Goodhart (2023) investigate Goodhart’s law analytically in the context where a machine learning model is used to evaluate an agent’s actions – unlike them, we specifically consider the RL setting. Ashton (2021) shows by example that RL systems can be susceptible to Goodharting in certain situations. In contrast, we show that Goodhart’s law is a robust phenomenon across a wide range of environments, explain why it occurs in RL, and use it to devise new solution methods. In the context of RL, Goodhart’s law is closely related to reward gaming. Specifically, if reward gaming means an agent finding an unintended way to increase its reward, then Goodharting is an instance of reward gaming where optimisation of the proxy initially leads to desirable behaviour, followed by a decrease after some threshold. Krakovna et al. (2020) list illustrative examples of reward hacking, while Pan et al. (2021) manually construct proxy rewards for several environments and then demonstrate that most of them lead to reward hacking. Zhuang & Hadfield-Menell (2020) consider proxy rewards that depend on a strict subset of the features which are relevant to the true reward and then show that optimising such a proxy in some cases may be arbitrarily bad, given certain assumptions. Skalse et al. (2022) introduce a theoretical framework for analysing reward hacking. 
They then demonstrate that, in any environment and for any true reward function, it is impossible to create a non-trivial proxy reward that is guaranteed to be unhackable. Also relevant, Everitt et al. (2017) study the related problem of reward corruption. Song et al. (2019) investigate overfitting in model-free RL due to faulty implications from correlations in the environment, and Pang et al. (2022) examine reward gaming in language models. Unlike these works, we analyse reward hacking through the lens of Goodhart’s law and show that this perspective provides novel insights. Gao et al. (2023) consider the setting where a large language model is optimised against a reward model that has been trained on a “gold standard” reward function, and investigate how the performance of the language model according to the gold standard reward scales in the size of the language model, the amount of training data, and the size of the reward model. They find that the performance of the policy follows a Goodhart curve, where the slope gets less prominent for larger reward models and larger amounts of training data. Unlike them, we do not only focus on language, but rather, aim to establish to what extent Goodhart dynamics occur for a wide range of RL environments. Moreover, we also aim to explain Goodhart’s law, and use it as a starting point for developing new algorithms. 2 PRELIMINARIES A Markov Decision Process (MDP) is a tuple \((S, A, \tau, \mu, R, \gamma)\), where \(S\) is a set of states, \(A\) is a set of actions, \(\tau : S \times A \rightarrow \Delta(S)\) is a transition function describing the outcomes of taking actions at certain states, \( \mu \in \Delta(S) \) is the distribution of the initial state, \( R \in \mathbb{R}^{|S \times A|} \) gives the reward for taking actions at each state, and \( \gamma \in [0,1] \) is a time discount factor. In the remainder of the paper, we consider \( A \) and \( S \) to be finite. Our work will mostly be concerned with rewardless MDPs, denoted by \( \text{MDP} = (S, A, \tau, \mu, \gamma) \), where the true reward \( R \) is unknown. A trajectory is a sequence \( \xi = (s_0, a_0, s_1, a_1, \ldots) \) such that \( a_i \in A, s_i \in S \) for all \( i \). We denote the space of all trajectories by \( \Xi \). A policy is a function \( \pi : S \to \Delta(A) \). We say that the policy \( \pi \) is deterministic if for each state \( s \) there is some \( a \in A \) such that \( \pi(s) = \delta_a \). We denote the space of all policies by \( \Pi \) and the set of all deterministic policies by \( \Pi_0 \). Each policy \( \pi \) on an MDP\( R \) induces a probability distribution over trajectories \( P(\xi|\pi) \); drawing a trajectory \( (s_0, a_0, s_1, a_1, \ldots) \) from a policy \( \pi \) means that \( s_0 \) is drawn from \( \mu \), each \( a_i \) is drawn from \( \pi(s_i) \), and \( s_{i+1} \) is drawn from \( \tau(s_i, a_i) \) for each \( i \). For a given MDP, the return of a trajectory \( \xi \) is defined to be \( G(\xi) := \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \) and the expected return of a policy \( \pi \) to be \( J(\pi) = \mathbb{E}_{\xi \sim \pi}[G(\xi)] \). An optimal policy is one that maximizes expected return; the set of optimal policies is denoted by \( \pi_* \). There might be more than one optimal policy, but the set \( \pi_* \) always contains at least one deterministic policy (Sutton & Barto, 2018). 
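As a small illustration of these definitions (not part of the paper's experiments), the expected return $J_R(\pi)$ of a fixed policy in a finite tabular MDP can be computed exactly by solving a linear system for the state values under $\pi$ and averaging over the initial-state distribution. A minimal sketch, with all quantities represented as NumPy arrays:

```python
import numpy as np

def expected_return(P, R, mu, pi, gamma):
    """Exact J(pi) for a tabular MDP.

    P     -- transition tensor, shape (S, A, S): P[s, a, s'] = tau(s' | s, a)
    R     -- reward matrix, shape (S, A)
    mu    -- initial state distribution, shape (S,)
    pi    -- stochastic policy, shape (S, A): pi[s, a] = P(a | s)
    gamma -- discount factor in [0, 1)
    """
    # State-to-state transition matrix and per-state expected reward under pi.
    P_pi = np.einsum("sa,sat->st", pi, P)      # shape (S, S)
    r_pi = (pi * R).sum(axis=1)                # shape (S,)
    # Solve (I - gamma * P_pi) V = r_pi for the state values under pi.
    V = np.linalg.solve(np.eye(len(mu)) - gamma * P_pi, r_pi)
    return float(mu @ V)

# Tiny random example with 3 states and 2 actions.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))     # valid transition distributions
R = rng.normal(size=(3, 2))
mu = np.array([1.0, 0.0, 0.0])
pi = np.full((3, 2), 0.5)                      # uniformly random policy
print(expected_return(P, R, mu, pi, gamma=0.9))
```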
We define the value-function \( V^\pi : S \to \mathbb{R} \) such that \( V^\pi[s] = \mathbb{E}_{\xi \sim \pi}[G(\xi)|s_0 = s] \), and define the \( Q \)-function \( Q^\pi : S \times A \to \mathbb{R} \) to be \( Q^\pi(s, a) = \mathbb{E}_{\xi \sim \pi}[G(\xi)|s_0 = s, a_0 = a] \). \( V^*, Q^* \) are the value and \( Q \) functions under an optimal policy. Given an MDP\( R \), each reward \( R \) defines a separate \( V^*_R, Q^*_R \), and \( J_R(\pi) \). In the remainder of this section, we fix a particular MDP\( R = (S, A, \tau, \mu, \gamma) \). ### 2.1 The Convex Perspective In this section, we introduce some theoretical constructs that are needed to express many of our results. We first need to familiarise ourselves with the occupancy measures of policies: **Definition 1** (State-action occupancy measure). We define a function \( \eta^- : \Pi \to \mathbb{R}^{|S \times A|} \), assigning, to each \( \pi \in \Pi \), a vector of occupancy measure describing the discounted frequency that a policy takes each action in each state. Formally, \[ \eta^\pi(s, a) = \sum_{t=0}^{\infty} \gamma^t P(s_t = s, a_t = a | \xi \sim \pi) \] We can recover \( \pi \) from \( \eta^\pi \) on all visited states by \( \pi(s, a) = (1 - \gamma) \eta^\pi(s, a) / (\sum_{a' \in A} \eta^\pi(s, a')) \). If \( \sum_{a' \in A} \eta^\pi(s, a') = 0 \), we can set \( \pi(s, a) \) arbitrarily. This means that we often can decide to work with the set of possible occupancy measures, rather than the set of all policies. Moreover: **Proposition 1.** The set \( \Omega = \{ \eta^\pi : \pi \in \Pi \} \) is the convex hull of the finite set of points corresponding to the deterministic policies \( \{ \eta^\pi : \pi \in \Pi_0 \} \). It lies in an affine subspace of dimension \( |S|(|A| - 1) \). Note that \( J_R(\pi) = \eta^\pi \cdot R \), meaning that each reward \( R \) induces a linear function on the convex polytope \( \Omega \), which reduces finding the optimal policy to solving a linear programming problem in \( \Omega \). Many of our results crucially rely on this insight. We denote the orthogonal projection map from \( \mathbb{R}^{|S \times A|} \) to \( \text{span}(\Omega) \) by \( M_\tau \), which means \( J_R(\pi) = \eta^\pi \cdot M_\tau R \). The proof of Proposition 1, and all other proofs, are given in the appendix. ### 2.2 Quantifying Goodhart’s Law Our work is concerned with quantifying the Goodhart effect. To do this, we need a way to quantify the distance between rewards. We do this using the projected angle between reward vectors. **Definition 2** (Projected angle). Given two reward functions \( R_0, R_1 \), we define \( \arg(R_0, R_1) \) to be the angle between \( M_\tau R_0 \) and \( M_\tau R_1 \). The projected angle distance is an instance of a STARC metric, introduced by Skalse et al. (2023a). Such metrics enjoy strong theoretical guarantees and satisfy many desirable desiderata for reward function metrics. For details, see Skalse et al. (2023a). In particular: **Proposition 2.** We have \( \arg(R_0, R_1) = 0 \) if and only if \( R_0, R_1 \) induce the same ordering of policies, or, in other words, \( J_{R_0}(\pi) \leq J_{R_0}(\pi') \iff J_{R_1}(\pi) \leq J_{R_1}(\pi') \) for all policies \( \pi, \pi' \). 1In their terminology, the canonicalisation function is \( M_\tau \), and measuring the angle between the resulting vectors is (bilipschitz) equivalent to normalising and measuring the distance with the \( \ell^2 \)-norm. Figure 2: Depiction of Goodharting in RandomMDP. 
Compare to Figure 1—here we only show the true reward obtained by a policy trained on each proxy. Darker color means a more distant proxy. We also need a way to quantify optimisation pressure. We do this using two different training methods. Both are parametrised by regularisation strength $\alpha \in (0, \infty)$: Given a reward $R$, they output a regularised policy $\pi_\alpha$. For ease of discussion and plotting, it is often more appropriate to refer to the (bounded) inverse of the regularisation strength: the optimisation pressure $\lambda_\alpha = e^{-\alpha}$. As the optimisation pressure increases, $\mathcal{J}(\pi_\alpha)$ also increases. **Definition 3** (Maximal Causal Entropy). We denote by $\pi_\alpha$ the optimal policy according to the regularised objective $\hat{R}(s, a) := R(s, a) + \alpha H(\pi(s))$ where $H(\pi(s))$ is the Shannon entropy. **Definition 4** (Boltzmann Rationality). The Boltzmann rational policy $\pi_\alpha$ is defined as $\mathbb{P}(\pi_\alpha(s) = a) \propto e^{\frac{1}{\alpha} Q^\star(s, a)}$, where $Q^\star$ is the optimal $Q$-function. We perform experiments to verify that our key results hold for either way of quantifying optimisation pressure. In both cases, the optimisation algorithm is Value Iteration (see e.g. Sutton & Barto [2018]). Finally, we need a way to quantify the magnitude of the Goodhart effect. Assume that we have a true reward $R_0$ and a proxy reward $R_1$, that $R_1$ is optimised according to one of the methods in Definition 3–4 and that $\pi_\lambda$ is the policy that is obtained at optimisation pressure $\lambda$. Suppose also that $R_0, R_1$ are normalised, so that $\min_\pi \mathcal{J}(\pi) = 0$ and $\max_\pi \mathcal{J}(\pi) = 1$ for both $R_0$ and $R_1$. **Definition 5** (Normalised drop height). We define the normalised drop height (NDH) as $\max_{\lambda \in [0, 1]} \mathcal{J}_{R_0}(\pi_\lambda) - \mathcal{J}_{R_0}(\pi_1)$, i.e. as the loss of true reward throughout the optimisation process. For an illustration of the above definition, see the grey dashed line in Figure 1. We observe that NDH is non-zero if and only if, over increasing optimisation pressure, the proxy and true rewards are initially correlated, and then become anti-correlated (we will see later that as long as the angle distance is less than $\pi/2$, their returns will almost always be initially correlated). In the Appendix C, we introduce more complex measures which quantify Goodhart’s law differently. Since our experiments indicate that they are all strongly correlated, we decided to focus on NDH as the simplest one. ### 3 GOODHARTING IS PERVASIVE IN REINFORCEMENT LEARNING In this section, we empirically demonstrate that Goodharting occurs pervasively across varied environments by showing that, for a given true reward $R_0$ and a proxy reward $R_1$, beyond a certain optimisation threshold, the performance on $R_0$ decreases when the agent is trained towards $R_1$. We test this claim over different kinds of environments (varying number of states, actions, terminal states and $\gamma$), reward functions (varying rewards’ types and sparsity) and optimisation pressure definitions. #### 3.1 ENVIRONMENT AND REWARD TYPES *Gridworld* is a deterministic, grid-based environment, with the state space of size $n \times n$ for parameter $n \in \mathbb{N}^+$, with a fixed set of five actions: $\uparrow, \rightarrow, \downarrow, \leftarrow$, and WAIT. The upper-left and lower-right corners are designated as terminal states. 
Attempting an illegal action $a$ in state $s$ does not change the state. *Cliff* (Sutton & Barto [2018] Example 6.6) is a Gridworld variant where an agent aims to reach the lower right terminal state, avoiding the cliff formed by the bottom row’s cells. Any cliff-adjacent move has a slipping probability $p$ of falling into the cliff. RandomMDP is an environment in which, for a fixed number of states \(|S|\), actions \(|A|\), and terminal states \(k\), the transition matrix \(\tau\) is sampled uniformly across all stochastic matrices of shape \(|S \times A| \times |S|\), satisfying the property of having exactly \(k\) terminal states. TreeMDP is an environment corresponding to nodes of a rooted tree with branching factor \(b = |A|\) and depth \(d\). The root is the initial state and each action from a non-leaf node results in states corresponding to the node’s children. Half of the leaf nodes are terminal states and the other half loop back to the root, which makes it isomorphic to an infinite self-similar tree. In our experiments, we only use reward functions that depend on the next state \(R(s,a) = R(s)\). In Terminal, the rewards are sampled iid from \(U(0,1)\) for terminal states and from \(U(-1,0)\) for non-terminal states. In Cliff, where the rewards are sampled iid from \(U(-5,0)\) for cliff states, from \(U(-1,0)\) for non-terminal states, and from \(U(0,1)\) for the goal state. In Path, where we first sample a random walk \(P\) moving only \(\rightarrow\) and \(\downarrow\) between the upper-left and lower-right terminal state, and then the rewards are constantly 0 on the path \(P\), sampled from \(U(-1,0)\) for the non-terminal states, and from \(U(0,1)\) for the terminal state. ### 3.2 Estimating the Prevalence of Goodharting To get an estimate of how prevalent Goodharting is, we run an experiment where we vary all hyperparameters of MDPs in a grid search manner. Specifically, we sample: - Gridworld for grid lengths \(n \in \{2, 3, \ldots, 14\}\) and either Terminal or Path rewards; - Cliff with tripping probability \(p = 0.5\) and grid lengths \(n \in \{2, 3, \ldots, 9\}\) and Cliff rewards; - RandomMDP with number of states \(|S| \in \{2, 4, 8, 16, \ldots, 512\}\), number of actions \(|A| \in \{2, 3, 4\}\), a fixed number of terminal states \(= 2\), and Terminal rewards; - TreeMDP with branching factor 2 and depth \(d \in \{2, 3, \ldots, 9\}\), for two different kinds of trees: (1) where the first half of the leaves are terminal states, and (2) where every second leaf is a terminal state, both using Terminal rewards. For each of those, we also vary temporal discount factor \(\gamma \in \{0.5, 0.7, 0.9, 0.99\}\), sparsity factor \(\sigma \in \{0.1, 0.3, 0.5, 0.7, 0.9\}\), optimisation pressure \(\lambda = -\log(x)\) for 7 values of \(x\) evenly spaced on \([0.01, 0.75]\) and 20 values evenly spaced on \([0.8, 0.99]\). After sampling an MDP/R, we randomly sample a pair of reward functions \(R_0\) and \(R_1\) from a chosen distribution. These are then sparsified (random \(\sigma\) fraction of values are zeroed) and linearly interpolated, creating a sequence of proxy reward functions \(R_t = (1-t)R_0 + tR_1\) for \(t \in [0,1]\). Note that for every environment, reward sampling scheme and fixed choice of parameters considered in Section 3.1, the sample space of rewards is convex. In high dimensions, two random vectors are approximately orthogonal with high probability, so the sequence \(R_t\) spans a range of distances. 
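The proxy-construction step just described can be sketched as follows, under simplifying assumptions: raw reward vectors and the plain angle in $\mathbb{R}^{|S \times A|}$ stand in for the projected angle of Definition 2, and the sizes and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 64, 4
sigma = 0.5                        # sparsity factor: fraction of entries zeroed

def sample_sparse_reward():
    r = rng.uniform(-1.0, 1.0, size=n_states * n_actions)
    mask = rng.random(r.shape) < sigma
    r[mask] = 0.0
    return r

def angle(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

R0 = sample_sparse_reward()        # "true" reward
R1 = sample_sparse_reward()        # unrelated reward used for interpolation

# A sequence of proxies spanning a range of distances from the true reward.
for t in np.linspace(0.0, 1.0, 10):
    Rt = (1.0 - t) * R0 + t * R1
    print(f"t = {t:.2f}   angle(R0, Rt) = {angle(R0, Rt):.3f} rad")
```

Because two random high-dimensional vectors are nearly orthogonal, the interpolated proxies $R_t$ cover angles ranging from $0$ up to roughly $\pi/2$.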
Each run consists of 10 proxy rewards; we use threshold \(\theta = 0.001\) for value iteration. We get a total of 30400 data points. An initial increase, followed by a decline in value with increasing optimisation pressure, indicates Goodharting behaviour. Overall, we find that a Goodhart drop occurs (meaning that the NDH > 0) for **19.3% of all experiments** sampled over the parameter ranges given above. This suggests that Goodharting is a common (albeit not universal) phenomenon in RL and occurs in various environments and for various reward functions. We present additional empirical insights, such as that training myopic agents makes Goodharting less severe, in Appendix G. For illustrative purposes, we present a single run of the above experiment in Figure 2. We can see that, as the proxy \(R_1\) is maximised, the true reward \(R_0\) will typically either increase monotonically or increase and then decrease. This is in accordance with the predictions of Goodhart’s law. ### 4 Explaining Goodhart’s Law in Reinforcement Learning In this section, we provide an intuitive, mechanistic account of why Goodharting happens in MDPs, that explains some of the results in Section 3. An extended discussion is also given in Appendix A. First, recall that \(J_R(\pi) = \eta^\pi \cdot R\), where \(\eta^\pi\) is the occupancy measure of \(\pi\). Recall also that \(\Omega\) is a convex polytope. Therefore, the problem of finding an optimal policy can be viewed as maximising a linear function \(R\) within a convex polytope \(\Omega\), which is a linear programming problem. Steepest ascent is the process that changes \(\vec{\eta}\) in the direction that most rapidly increases \(\vec{\eta} \cdot R\) (for a formal definition, see Chang & Murty (1989) or Denel et al. (1981)). The path of steepest ascent... forms a piecewise linear curve whose linear segments lie on the boundary of $\Omega$ (except the first segment, which may lie in the interior). Due to its similarity to gradient-based optimisation methods, we expect most policy optimisation algorithms to follow a path that roughly approximates steepest ascent. Steepest ascent also has the following property: **Proposition 3** (Concavity of Steepest Ascent). If $\vec{t}_i := \frac{\eta_{i+1} - \eta_i}{||\eta_{i+1} - \eta_i||}$ for $\eta_i$ produced by steepest ascent on reward vector $R$, then $\vec{t}_i \cdot R$ is decreasing. We can now explain Goodhart’s law in MDPs. Assume we have a true reward $R_0$ and a proxy reward $R_1$, that we optimise $R_1$ through steepest ascent, and that this produces a sequence of occupancy measures $\{\eta_i\}$. Recall that this sequence forms a piecewise linear path along the boundary of a convex polytope $\Omega$, and that $J_{R_0}$ and $J_{R_1}$ correspond to linear functions on $\Omega$ (whose directions of steepest ascent are given by $M_\tau R_0$ and $M_\tau R_1$). First, if the angle between $M_\tau R_0$ and $M_\tau R_1$ is less than $\pi/2$, and the initial policy $\eta_0$ lies in the interior of $\Omega$, then it is guaranteed that $\eta \cdot R_0$ will increase along the first segment of $\{\eta_i\}$. However, when $\{\eta_i\}$ reaches the boundary of $\Omega$, steepest ascent continues in the direction of the projection of $M_\tau R_1$ onto this boundary. If this projection is far enough from $R_0$, optimising in the direction of $M_\tau R_1$ would lead to a decrease in $J_{R_0}$ (c.f. Figure 3B). This corresponds to Goodharting. $R_0$ may continue to increase, even after another boundary region has been hit. 
However, each time $\{\eta_i\}$ hits a new boundary, it changes direction, and there is a risk that $\eta \cdot R_0$ will decrease. In general, this is more likely if the angle between that boundary and $\{\eta_i\}$ is close to $\pi/2$, and less likely if the angle between $M_\tau R_0$ and $M_\tau R_1$ is small. This explains why Goodharting is less likely when the angle between $M_\tau R_0$ and $M_\tau R_1$ is small. Next, note that Proposition 3 implies that the angle between $\{\eta_i\}$ and the boundary of $\Omega$ will increase over time along $\{\eta_i\}$. This explains why Goodharting becomes more likely when more optimisation pressure is applied. Let us consider an example to make our explanation of Goodhart’s law more intuitive. Let $M_{2,2}$ be an MDP with 2 states and 2 actions, and let $R_0, R_1, R_2$ be three reward functions in $M_{2,2}$. The full specifications for $M_{2,2}$ and $R_0, R_1, R_2$ are given in Appendix E. We will refer to $R_0$ as the true reward. The angle between $R_0$ and $R_1$ is larger than the angle between $R_0$ and $R_2$. Using Maximal Causal Entropy, we can train a policy over each of the reward functions, using varying degrees of optimisation pressure, and record the performance of the resulting policy with respect to the true reward. Zero optimisation pressure results in the uniformly random policy, and maximal optimisation pressure results in the optimal policy for the given proxy (see Figure 3A). As we can see, we get Goodharting for $R_2$ – increasing $R_2$ initially increases $R_0$, but there is a critical point after which further optimisation leads to worse performance under $R_0$. To understand what is happening, we embed the policies produced during each training run in $\Omega$, together with the projections of $R_0, R_1, R_2$ (see Figure 3B). We can now see that Goodharting must occur precisely when the angle between the true reward and the proxy reward passes the critical threshold, such that the training run is deflected when it hits the boundary of $\Omega$, and the optimal deterministic policy changes from the lower-left to the upper-left corner. This is the underlying mechanism that produces Goodhart behaviour in reinforcement learning! We thus have an explanation for why the Goodhart curves are so common. Moreover, this insight also explains why Goodharting does not always happen and why a smaller distance between the true reward and the proxy reward is associated with less Goodharting. We can also see that Goodharting will be more likely when the angle between $\{\eta_i\}$ and the boundary of $\Omega$ is close to $\pi/2$ – this is why Proposition 3 implies that Goodharting becomes more likely with more optimisation pressure. 5 Preventing Goodharting Behaviour We have seen that when a proxy reward $R_1$ is optimised, it is common for the true reward $R_0$ to first increase, and then decrease. If we can stop the optimisation process before $R_0$ starts to decrease, then we can avoid Goodharting. Our next result shows that we can provably prevent Goodharting, given that we have a bound $\theta$ on the distance between $R_1$ and $R_0$: **Theorem 1.** Let $R_1$ be any reward function, let $\theta \in [0, \pi]$ be any angle, and let $\pi_A, \pi_B$ be any two policies.
Then there exists a reward function $R_0$ with $\arg(R_0, R_1) \leq \theta$ and $J_{R_0}(\pi_A) > J_{R_0}(\pi_B)$ iff $$\frac{J_{R_1}(\pi_B) - J_{R_1}(\pi_A)}{||\eta^{\pi_B} - \eta^{\pi_A}||} < \sin(\theta)||M_\tau R_1||$$ **Corollary 1 (Optimal Stopping).** Let $R_1$ be a proxy reward, and let $\{\pi_i\}$ be a sequence of policies produced by an optimisation algorithm. Suppose the optimisation algorithm is concave with respect to the policy, in the sense that $\frac{J_{R_1}(\pi_{i+1}) - J_{R_1}(\pi_i)}{||\eta^{\pi_{i+1}} - \eta^{\pi_i}||}$ is decreasing. Then, stopping at the minimal $i$ with $$\frac{J_{R_1}(\pi_{i+1}) - J_{R_1}(\pi_i)}{||\eta^{\pi_{i+1}} - \eta^{\pi_i}||} < \sin(\theta)||M_\tau R_1||$$ gives the policy $\pi_i \in \{\pi_i\}$ that maximizes $\min_{R_0 \in \mathcal{F}_R^\theta} J_{R_0}(\pi_i)$, where $\mathcal{F}_R^\theta$ is the set of rewards given by $\{R_0 : \arg(R_0, R_1) \leq \theta, ||M_\tau R_0|| = m\}$. Let us unpack the statement of this result. If we have a proxy reward $R_1$, and we believe that the angle between $R_1$ and the true reward $R_0$ is at most $\theta$, then $\mathcal{F}_R^\theta$ is the set of all possible true reward functions with a given magnitude $m$. Note that no generality is lost by assuming that $R_0$ has magnitude $m$, since we can rescale any reward function without affecting its policy order. Now, if we optimise $R_1$, and want to provably avoid Goodharting, then we must stop the optimisation process at a point where there is no Goodharting for any reward function in $\mathcal{F}_R^\theta$. Theorem 1 provides us with such a stopping point. Moreover, if the policy optimisation process is concave, then Corollary 1 tells us that this stopping point, in a certain sense, is worst-case optimal. By Proposition 3, we should expect most optimisation algorithms to be approximately concave. Theorem 1 derives an optimal stopping point along a single optimisation curve. Our next result finds the optimum among all policies through maximising a regularised objective function. **Proposition 4.** Given a proxy reward $R_1$, let $\mathcal{F}_R^\theta$ be the set of possible true rewards $R$ such that $\arg(R, R_1) \leq \theta$ and $R$ is normalized so that $\|M_\tau R\| = \|M_\tau R_1\|$. Then, a policy $\pi$ maximises $\min_{R \in \mathcal{F}_R^\theta} J_R(\pi)$ if and only if it maximises $J_{R_1}(\pi) - \kappa \|\eta^\pi \| \sin(\arg(\eta^\pi, R_1))$, where $\kappa = \tan(\theta) \|M_\tau R_1\|$. Moreover, each local maximum of this objective is a global maximum when restricted to $\Omega$, which means that this function can be optimised for in practice. The above objective can be rewritten as $\|\vec{\eta}_\parallel\| - \kappa \|\vec{\eta}_\perp\|$, where $\vec{\eta}_\parallel$ and $\vec{\eta}_\perp$ are the components of $\eta^\pi$ parallel and perpendicular to $M_\tau R_1$. Stopping early clearly loses proxy reward, but it is important to note that it may also lose true reward. Since the algorithm is pessimistic, the optimisation stops before any reward in $\mathcal{F}_R^\theta$ decreases. If we continued ascent past this stopping point, exactly one reward function in $\mathcal{F}_R^\theta$ would decrease (almost surely), but most other reward functions would increase. If the true reward function is in this latter set, then early stopping loses some true reward.
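A minimal sketch of the stopping rule of Corollary 1 on a toy tabular MDP is given below; occupancy measures are computed exactly from the transition kernel, Boltzmann-rational policies at increasing inverse temperature play the role of the optimisation curve, and the raw proxy vector is used in place of $M_\tau R_1$ (i.e., the projection onto $\text{span}(\Omega)$ is skipped for simplicity). All names and constants are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, gamma = 6, 3, 0.9
tau = rng.random((nS, nA, nS)); tau /= tau.sum(-1, keepdims=True)
mu = np.full(nS, 1.0 / nS)
R1 = rng.uniform(-1.0, 1.0, (nS, nA))           # proxy reward
theta = 0.3                                     # assumed bound on arg(R0, R1)

def occupancy(policy):
    """Exact discounted state-action occupancy eta^pi as an |S| x |A| array."""
    P_pi = np.einsum("sap,sa->sp", tau, policy)
    d = np.linalg.solve(np.eye(nS) - gamma * P_pi.T, mu)   # discounted state visitation
    return d[:, None] * policy

# Optimal Q-values for the proxy, used to build Boltzmann-rational policies.
Q = np.zeros((nS, nA))
for _ in range(2000):
    Q = R1 + gamma * tau @ Q.max(axis=1)

def boltzmann(beta):
    logits = beta * Q
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

# Sweep increasing optimisation pressure and apply the Corollary-1 stopping test.
betas = np.linspace(0.1, 30.0, 60)
etas = [occupancy(boltzmann(b)) for b in betas]
returns = [float((e * R1).sum()) for e in etas]      # J_{R1}(pi) = eta . R1
threshold = np.sin(theta) * np.linalg.norm(R1)       # stand-in for sin(theta)*||M_tau R1||

stop = len(betas) - 1
for i in range(len(betas) - 1):
    step = np.linalg.norm(etas[i + 1] - etas[i])
    if step > 1e-12 and (returns[i + 1] - returns[i]) / step < threshold:
        stop = i
        break
print(f"stop at beta = {betas[stop]:.2f}, proxy return J_R1 = {returns[stop]:.3f}")
```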
Our next result gives an upper bound on this quantity: **Proposition 5.** Let $R_0$ be a true reward and $R_1$ a proxy reward such that $\|R_0\| = \|R_1\| = 1$ and $\arg(R_0, R_1) = \theta$, and assume that the steepest ascent algorithm applied to $R_1$ produces a sequence of policies $\pi_0, \pi_1, \ldots \pi_n$. If $\pi_\star$ is optimal for $R_0$, we have that $$J_{R_0}(\pi_\star) - J_{R_0}(\pi_n) \leq \text{diameter}(\Omega) - \|\eta^{\pi_n} - \eta^{\pi_0}\| \cos(\theta).$$ It would be interesting to develop policy optimisation algorithms that start with an initial estimate $R_1$ of the true reward $R_0$ and then refine $R_1$ over time as the ambiguity in $R_1$ becomes relevant. Theorems 1 and 4 could then be used to check when more information about the true reward is needed. While we mostly leave this for future work, we carry out some initial exploration in Appendix F. ### 5.1 Experimental Evaluation of Early Stopping We evaluate the early stopping algorithm experimentally. One problem is that Algorithm 4 involves the projection onto $\Omega$, which is infeasible to compute exactly due to the number of deterministic policies being exponential in $|S|$. Instead, we observe that using MCE and BR approximates the steepest ascent trajectory. Using the exact setup described in Section 3.2, we verify that the early stopping procedure prevents Goodharting in all cases, that is, employing the criterion from Corollary 1 always results in NDH = 0. Because early stopping is pessimistic, some reward will usually be lost. We are interested in whether the choice of (1) operationalisation of optimisation pressure, (2) the type of environment or (3) the angle distance $\theta$ impacts the performance of early stopping. A priori, we expected the answer to the first question to be negative and the answer to the third to be positive. Figure 5a shows that, as expected, the choice between MCE and Boltzmann Rationality has little effect on the performance. Unfortunately, and somewhat surprisingly, the early stopping procedure can, in general, lose out on a lot of reward: in our experiments, this is on average between 10% and 44%, depending on the size and the type of environment. The relationship between the distance and the lost reward seems to indicate that for small values of $\theta$, the loss of reward is less significant (c.f. Figure 5b). ### 6 Discussion **Computing $\eta$ in high dimensions:** Our early stopping method requires computing the occupancy measure $\eta$. Occupancy measures can be approximated via rollouts, though this approximation may be expensive and noisy. Another option is to solve for $\eta = \eta^\pi$ via $\vec{\eta} = (I - \Pi T)^{-1} \Pi \vec{\mu}$ where $T$ is the transition matrix, $\mu$ is the initial state distribution, and $\Pi_{s,(s,a)} = P(\pi(s) = a)$. This solution could be approximated in large environments. Approximating $\theta$: Our early stopping method requires an upper bound $\theta$ on the angle between the true reward and the proxy reward. In practice, this should be seen as a measure of how accurate we believe the proxy to be. If the proxy reward is obtained through reward learning, then we may be able to estimate $\theta$ based on the learning algorithm, the amount of training data, and so on. Moreover, if we have a (potentially expensive) method to evaluate the true reward, such as expert judgement, then we can estimate $\theta$ directly (even in large environments). For details, see [Skalse et al. (2023a)]. 
Key assumptions: An important consideration when employing any optimisation algorithm is its behaviour when its key assumptions are not met. For our early stopping method, if the provided $\theta$ does not upper-bound the angle between the proxy and the true reward, then the learnt policy may, in the worst case, result in as much Goodharting as a policy produced by naïve optimisation.\footnote{However, it might still be possible to bound the worst-case performance further using the norm of the transition matrix (defining the geometry of the polytope $\Omega$). This will be an interesting topic for future work.} On the other hand, if the optimisation algorithm is not concave, then this can only cause the early-stopping procedure to stop at a sub-optimal point; Goodharting is still guaranteed to be avoided. This is also true if the upper bound $\theta$ is not tight. Significance and Implications: Our work has several direct implications. In Section 3, we show that Goodharting occurs for a wide range of environments and reward functions. This means that we should expect to see Goodharting often when optimising for misspecified proxy rewards. In Section 4, we provide a mechanistic explanation for why Goodharting occurs. We expect this to be helpful for further progress in the study of reward misspecification. In Section 5, we provide early stopping methods that provably avoid Goodharting, and show that these methods, in a certain sense, are worst-case optimal. However, these methods can lead to less true reward than naive optimisation. This means that they are most applicable when it is essential to avoid Goodharting. Limitations and Future Work: We do not have a comprehensive understanding of the dynamics at play when a misspecified reward function is maximised, and our work does not exhaust this area of study. An important question is what types of failure modes can occur in this setting, and how they may be detected and mitigated. Our work studies one important failure mode (i.e. Goodharting), but there may be other distinctive failure modes that could be described and studied as well. A related important question is precisely how a proxy reward $R_1$ may differ from the true reward $R_0$, before maximising $R_1$ might be bad according to $R_0$. There are several existing results pertaining to this question ([Ng et al., 1999], [Gleave et al., 2020], [Skalse et al., 2022, 2023b]), but there is at the moment no comprehensive answer. Another interesting direction is to use our results to develop policy optimisation algorithms that collect more data about the reward function over time, as this information is needed. We discuss this direction in Appendix E. Finally, it would be interesting to try to find principled relaxations of the methods in Section 5 that attain better practical performance while retaining desirable theoretical guarantees. ACKNOWLEDGEMENTS The authors would like to thank Oliver Sourbut, Bogdan Ionut Cirstea, Sam Staton and anonymous reviewers for their detailed feedback on the draft of this paper. This research was conducted and funded as a part of Oxford AI Safety Hub Labs. The first author was supported by a separate grant from Open Philanthropy. REFERENCES Hal Ashton. Causal Campbell-Goodhart’s Law and Reinforcement Learning:. Proceedings of the 13th International Conference on Agents and Artificial Intelligence, pp. 67–73, 2021. doi: 10.5220/0010197300670073. Soo Y. Chang and Katta G. Murty. The steepest descent gravitational method for linear programming. 
Discrete Applied Mathematics, 25(3):211–239, 1989. ISSN 0166-218X. doi: 10.1016/0166-218X(89)90002-4. Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pp. 4302–4310, Red Hook, NY, USA, December 2017. Curran Associates Inc. ISBN 978-1-5108-6096-4. J. Denel, J. C. Fiorot, and P. Huard. The steepest-ascent method for the linear programming problem. In RAIRO. Analyse Numérique, volume 15, pp. 195–200, 1981. doi: 10.1051/m2an/1981150301951. Tom Everitt, Victoria Krakovna, Laurent Orseau, and Shane Legg. Reinforcement learning with a corrupted reward channel. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, pp. 4705–4713, Melbourne, Australia, August 2017. AAAI Press. ISBN 978-0-9992411-0-3. Eugene A. Feinberg and Uriel G. Rothblum. Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes. Mathematics of Operations Research, 37(1):129–153, 2012. ISSN 0364-765X. Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835–10866. PMLR, 2023. Adam Gleave, Michael D. Dennis, Shane Legg, Stuart Russell, and Jan Leike. Quantifying Differences in Reward Functions. In International Conference on Learning Representations, October 2020. C. A. E. Goodhart. Problems of Monetary Management: The UK Experience. In C. A. E. Goodhart (ed.), Monetary Theory and Practice: The UK Experience, pp. 91–121. Macmillan Education UK, London, 1984. ISBN 978-1-349-17295-5. doi: 10.1007/978-1-349-17295-5_4. Christopher A. Hennessy and Charles A. E. Goodhart. Goodhart’s Law and Machine Learning: A Structural Perspective. International Economic Review, 64(3):1075–1086, 2023. ISSN 1468-2354. doi: 10.1111/iere.12633. Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in Atari. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, pp. 8022–8034, Red Hook, NY, USA, December 2018. Curran Associates Inc. W. Bradley Knox, Alessandro Allievi, Holger Banzhaf, Felix Schmitt, and Peter Stone. Reward (Mis)design for autonomous driving. Artificial Intelligence, 316:103829, March 2023. ISSN 0004-3702. doi: 10.1016/j.artint.2022.103829. Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, and Shane Legg. Specification gaming: the flip side of AI ingenuity, 2020. URL https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity
X0fDR10B7c
Page 3: it is noted that the way of removing the impact of a latent confounder is to perform an intervention on a randomly selected individual. Then, with the intervened data, could we simply do the standard ATE calculation? What is the main advantage of the PC model for causal inference compared with other methods?
Predictive Coding beyond Correlations Anonymous authors Paper under double-blind review Abstract Bayesian and causal inference are fundamental processes for intelligence. Bayesian inference describes observations: what can be inferred about y if we observe a related variable x? Causal inference models interventions: if we directly change x, how will y change? Predictive coding is a neuroscience-inspired method for performing Bayesian inference on continuous state variables using local information only. In this work, we show how a simple change in the inference process of predictive coding enables interventional and counterfactual inference in scenarios where the causal graph is known. We then extend our results, and show how predictive coding can be used in cases where the graph is unknown, and has to be inferred from observational data. This allows us to perform structure learning and causal query answering on predictive coding-based structural causal models. Empirically, we test our method on a large number of benchmarks, as well as presenting experiments that show potential applications in machine learning. 1 Introduction Predictive coding (PC) is an influential theory of learning and perception in the brain [Rao & Ballard, 1999; Salvatori et al., 2023; Millidge et al., 2021], with roots in Bayesian inference and signal compression. Conventional literature primarily deals with hierarchical models relating top-down predictions to internal states and external stimuli [Rao & Ballard, 1999; Friston, 2005, 2008; Whittington & Bogacz, 2017]. A recent work went beyond this, and has shown how PC can be used to perform inference and learning on structures with any topology [Salvatori et al., 2022a]. A natural consequence is that PC can be used to perform Bayesian inference on models with more entangled structures, such as directed graphical models with cyclic structures. This kind of inference is, however, limited to the computation of conditional probabilities, i.e., correlations. In this work, we question whether PC models have the capabilities to go beyond the computation of correlations, and show how it is possible to use the same model for causal inference [Pearl, 2009]. Research on causality is divided into two primary areas: causal inference, which aims to infer the effect of an intervention in a known system, and causal discovery, which aims to discover the causal graph underlying observational data. Here, we tackle both tasks, by first showing how PC is able to naturally model interventions using a differentiable framework that aims to minimize a variational free energy [Friston, 2005; Rao & Ballard, 1999], with only a simple adjustment to their standard Bayesian inference procedure, and then showing how to use the same framework to perform structure learning from observational data (up to an equivalence class, called Markov equivalence class). This shows that PC is an end-to-end causality engine, able to answer causal queries without previous knowledge of the parent-child relationships. The main goal of our work is not to solve open problems in the causality literature, but to show how it is possible to model interventions in a biologically plausible and efficient fashion, without the need of mutilating a graph, as it is instead done in Fig. 1. The rest of this work has the following structure: in Section 2, we review the basic concepts of Bayesian networks, and their connection with Pearlian’s causality. 
Then, we show how the PC framework developed to train graphs with arbitrary topologies [Salvatori et al., 2022a] can be used to perform conditional inference on Bayesian networks. In Section 3, we show how the same model can compute interventions by setting the prediction error of a specific node to zero during the inference process. Empirically, we test our claims on PC-based structural causal models, and show promising results on machine learning and causal query benchmarks [De Brouwer, 2022]. In Section 4, we show how PC graphs can perform structure learning from observational data. 2 BAYESIAN NETWORKS AND PREDICTIVE CODING Assume we have a set of $N$ random variables $\mathbf{X} = \{x_1, \ldots, x_N\}$, with $x_i \in \mathbb{R}^d$. Relations among variables are represented by a directed graph $G = (V, E)$, also called causal graph of $S$. Every vertex $v_i \in V$ represents a random variable $x_i$, and every edge $(i, j)$ represents a causal relation from $x_i$ to $x_j$. The causal graph defines the joint distribution of the system, computed as follows: $$p(x_1, \ldots, x_N) = \Pi_{i=1}^{N} p(x_i | \text{par}(x_i)),$$ with $\text{par}(x_i)$ being the parent nodes of $x_i$. In Fig. 1, we show a graph with joint probability $$p(x_1, x_2, x_3, x_4) = p(x_1)p(x_2)p(x_3 | x_1, x_2)p(x_4 | x_2, x_3).$$ Given the causal graph on the left, the arrows indicate the Bayesian networks that we need to perform conditional inference on, to compute correlations (top) and interventions (bottom). In a conditional query, we know from data that $x_3 = s_3$, and we infer the remaining variables by computing the posterior $p(x_1, x_2, x_4 | x_3 = s_3)$. In an interventional query, we compute $p(x_4 | \text{do}(x_3 = s_3))$. To do that, we have to first mutilate the structure of the graph, and then perform conditional inference on the new graph, with the joint probability $p(x_1, x_2, x_3, x_4) = p(x_1)p(x_2)p(x_4 | x_2, x_3)$. More formally, consider the partition $\mathbf{X} = \mathbf{X}_{\text{data}} \cup \mathbf{X}_{\text{unk}}$, where $\mathbf{X}_{\text{data}} = x_{i_1}, \ldots, x_{i_n}$ is a subset of variables of which we have information via a data point $\mathbf{S}_{\text{data}} = s_{i_1}, \ldots, s_{i_n}$. We want to infer the values of the unknown nodes. This is trivial when a data point is the root of a tree only formed by unknown variables. In this case, it is possible to compute them via a forward pass. The problem, however, becomes more complex, and often intractable, when it is necessary to infer missing nodes that are parents of data points, as we need to invert the generative model of specific nodes. When dealing with continuous variables, we can use PC to perform such an inversion [Friston, 2005]. Posterior distribution. In most cases, the computation of the posterior distribution $p(\mathbf{X}_{\text{unk}} | \mathbf{X}_{\text{data}} = \mathbf{S}_{\text{data}})$ is intractable. A standard approach is to use variational inference with an approximate posterior $q(\mathbf{X}_{\text{unk}})$ restricted to belong to a family of distributions of simpler form than the true posterior. To make this approximation as similar as possible to the true posterior, the KL-divergence between the two distributions is minimized. 
Since the true posterior is not known, we instead minimize an upper bound on this KL divergence, known as the variational free energy: $$F = \mathbb{E}_q[\log(q(\mathbf{X}_{\text{unk}})) - \log(p(\mathbf{X}_{\text{unk}}, \mathbf{X}_{\text{data}}))].$$ We consider every edge $(i, j)$ to be a linear map $W^{(i,j)}$ composed with a non-linearity $f(x)$ (such as ReLU). This defines how every parent node influences its child nodes. We further set the probability distribution of every node to be a multivariate Gaussian with unitary covariance matrix. In detail, every variable $x_i$ is sampled from a Gaussian distribution of mean $\mu_i$ and variance 1, where $$\mu_i = \sum_{k \in \text{par}(i)} W^{(k,i)} f(x_k). \quad (1)$$ To better derive a tractable formulation of the variational free energy, we use a mean-field approximation to assume a factorization into conditional independent terms, and assume that each of these terms is a Dirac delta (or, equivalently, a Laplace approximation). Note that these assumptions are standard in the literature [Friston, 2003; Friston et al., 2007; Millidge et al., 2021; Salvatori et al., 2022c], and lead to the following variational free energy: $$F = \sum_i \|x_i - \mu_i\|^2 + \ln(2\pi). \quad (2)$$ 2.1 PREDICTIVE CODING GRAPHS The derived variational free energy corresponds, up to an irrelevant constant, to the energy function of PC graphs, flexible models that can be queried in different ways [Salvatori et al., 2022a]. Each vertex $v_i$ of a PC graph encodes several quantities: the main one is the value of its activity, which changes over time, and we refer to it as a value node $x_{i,t}$. This is a parameter of the model, which is updated via gradient descent during inference. Additionally, each vertex has a prediction $\mu_{i,t}$ of its value node, based on input from value nodes of other vertices, as detailed in Eq. 1. The error of every vertex at every time step $t$ is then given by the difference between its value node and its prediction, i.e., $e_{i,t} = x_{i,t} - \mu_{i,t}$. This local definition of error allows PC graphs to learn using only local information. Here, we review the inference phase of PC, which computes correlations among data and results in an approximate Bayesian posterior over all node. It is also possible to train these models by updating the parameters $\mathbf{W}$ via stochastic gradient descent over a set of examples. For a detailed description of how learning on PC graphs work, we refer to Appendix A. Figure 1: Example socio-economic graph and its structure after conditioning and intervening on education level. Figure 2: (a) PC graph with the same causal structure of that in Fig. 1. Every vertex \( v_i \) is associated with a value node \( x_i \), and an error node \( e_i \). The arrows show the influence of every node to the others: the prediction information follows the direction of the arrows of the original graph, while the error information goes backwards. (b) Example of conditioning in PC graphs. We fix the value of \( x_3 \), making the effect of all the arrows entering \( v_3 \) irrelevant, as \( x_3 \) is fixed and hence ignores incoming information. This, however, does not apply to error information going out from \( v_3 \), which keeps influencing \( x_1 \) and \( x_2 \); this is solved in (c) Example of an intervention in PC graphs. 
According to Pearl’s causal theory, the do-operator on a node (\( v_3 \) in this case) removes the incoming edges, to prevent the newly introduced information from flowing backwards and influencing the parent nodes. As in PC the only information flowing opposite to the causal relations is the error information, an intervention can simply be performed by removing (or setting to zero) the error node. Query by conditioning. Assume that we are presented with a data point \( S_{data} = \{ s_1, \ldots, s_n \} \). Then, the value nodes \( x_{i_1}, \ldots, x_{i_n} \) of the corresponding vertices are fixed to the entries of \( S_{data} \) for every \( t \), while the remaining ones are initialized to some random values, and continuously updated until convergence via gradient descent to minimize the energy function, following the rule \( \Delta x_{i,t} \propto \partial F_t / \partial x_{i,t} \). The unconstrained vertices will converge to a minimum of the energy given the fixed vertices, thus computing the conditional expectation of the latent vertices given the observed stimulus. Formally, the inference step estimates the conditional expectation \[ E(X_T \mid \forall t : (x_{i_1,t}, \ldots, x_{i_n,t}) = (s_{i_1}, \ldots, s_{i_n})), \] where \( X_T \) is the matrix of all value nodes at convergence. This computes the correlation among different parameters of the causal graph. In the next section, we show how to model interventions in PC graphs. For a neural implementation of a PC graph, see Fig. 2(a), and for the neural implementation of a conditional query, where the value of a specific node is fixed to a data point, see Fig. 2(b). 3 Causal Inference via Predictive Coding The main goal of causal inference is to be able to simulate interventions in a process, and study the counterfactual effects of such interventions. In statistics, interventions are denoted by the \( do(-) \) operator [Pearl, 1995, 2009]. The value of a random variable \( x_i \) when performing an intervention on a different variable \( x_j \) is denoted by \( p(x_i \mid do(x_j = s)) \). This is equivalent to the question: What would \( x_i \) be in this environment if we set \( x_j = s \)? In the case of the example in Fig. 1, the question could be: What would the expected income level be, if we changed the education level of this person? In fact, while ‘education’ and ‘income level’ may be correlated by a hidden confounder (intelligence, in this case), an intervention removes this correlation by changing the education level of a randomly selected individual, regardless of their level of intelligence. To perform an intervention on a Bayesian network, we first have to act on the structure of the graph, and then query the model by conditioning on the new graph, as shown in Fig. 1. Assume that we have a graph \( G \), and we want to know the value of \( x_i \) after performing an intervention on \( x_j \) by fixing it to some value \( s \). This can be done according to the two following steps: 1. Generate a new graph \( G' \) by removing all the in-coming edges of \( v_j \) from \( G \). 2. Compute the conditional expectation \( E(X \mid x_j = s) \) using \( G' \). Interventional query. In a PC graph, the only information that flows in the opposite direction of an arrow is the prediction error. In fact, if we have \( v_1 \rightarrow v_2 \), the update of the value node \( x_1 \) is affected by the error \( e_2 \). To avoid this and perform an intervention, we set the value of \( e_2 \) to zero throughout the inference phase. This is convenient, since it allows us not to act directly on the structure of the graph to perform interventions, but rather to perform them dynamically ‘at runtime’, which results in increased efficiency in the case of nodes with numerous incoming edges. We assume hard interventions over soft ones (Correa & Bareinboim 2020), thus eliminating all parent variable effects. This preserves the classical 3-step procedure for computing counterfactuals. The key distinction is how interventions are executed on the PC graph, specifically by nullifying the prediction error. This approach obviates the need for explicit adjustment formulas and back-door criteria in causal inference. Hence, we have the following theorem, proven in Appendix B. **Theorem 3.1.** Consider a PC graph whose structure is given by a directed acyclic graph \( G \), and assume we perform an intervention \( do(x_j) = s \) via a conditional query on the mutilated PC graph. The distribution of the variables obtained is equivalent to the one obtained by running inference on the original PC graph while setting both \( x_{j,t} = s \) and \( e_{j,t} = 0 \) for every \( t > 0 \). That is, \[ E(X_T \mid do(x_j = s)) = E(X_T \mid \forall t : x_{j,t} = s, \, e_{j,t} = 0). \] Figure 3: What would \( x_4 \) have been, had \( x_3 \) been equal to \( s^*_3 \) in situation \( U = u \)? This figure provides an example of the three-step process to perform counterfactuals, using a structural causal model with four exogenous and four endogenous variables. We are given two kinds of data: the original values of \( x_1, \ldots, x_4 \), which correspond to past information, here denoted by \( s_1, \ldots, s_4 \), and the intervention information \( s^*_3 \), needed to understand what would have happened to \( x_4 \) if we had changed \( s_3 \) to \( s^*_3 \). The final answer corresponds to the node \( \tilde{x}_4 \) obtained in the prediction step. ### 3.1 Structural Causal Models While interventions serve to answer questions about the consequences of actions performed in the present, counterfactuals are used to study interventions in the past. For instance, we could ask: What would the value of \( x_i \) have been if \( x_j \) had been set to \( s^*_j \), given a particular context \( U = u \)? Using a concrete example, in reference to Fig. 1: What would this person’s income have been if they had earned a master’s degree, under the conditions defined by \( U = u \)? This is modeled using Structural Causal Models (SCMs). An SCM is a triple \((U, V, F)\), where \( V \) is the set of endogenous (observable) variables corresponding to the internal vertices of the causal graph, \( U \) is the set of exogenous (unobservable) variables that serve as root nodes in the graph, and \( F \) is the set of functions that determine the values of the endogenous variables according to the structure of \( G \). An example of an SCM is represented in Fig. 3. Then, counterfactual inference with an SCM involves three steps: 1. **Abduction:** Here, we are provided with the values \((s_1, \ldots, s_N)\) of the endogenous nodes in \( V \). We use them to compute the values of the exogenous variables, which we denote by \( \tilde{u}_1, \ldots, \tilde{u}_N \), according to the following: \[ E(u_1, \ldots, u_N \mid \forall t : (x_1, \ldots, x_N) = (s_1, \ldots, s_N)). \] 2. **Action:** Now that we have computed the values of the exogenous variables, we fix them and perform an intervention on \( x_j \).
Particularly, we set \( x_j = s^*_j \), and we set \( e_j = 0 \), which has the effect of removing any influence of \( x_j \) on its parent nodes. 3. **Prediction:** We now have all the elements to compute the counterfactual on \( x_i \), which is: \[ E(x_i \mid \forall t : (u_1, \ldots, u_M) = (\tilde{u}_1, \ldots, \tilde{u}_M), x_j = s^*_j, e_j = 0). \] Figure 4: (a) How to compute a prediction given a data point on a fully connected PC graph, using interventional queries. (b) Test accuracy over epochs computed via query by conditioning and query by intervention. (c) Left to right: causal structure of the SCM. Convergence behavior of PC energy vs. error metric (MAE), during SCM learning for the butterfly graph. Error (by node) of interventional query estimates on $x_3$ (yellow node). Error (by node) of counterfactual query estimates with intervention on $x_3$ given factual data (blue nodes). 3.2 EXPERIMENTS We now perform experiments that confirm the technical discussion and claims made in the previous section. To this end, we test PC graphs in their ability to compute causal queries on three levels of Pearl’s ladder of causation (2009), namely, association (level 1), intervention (level 2), and counterfactuals (level 3), for both linear and non-linear data. Then, we show how interventions can be used to improve the performance of classification tasks on fully connected models. We conclude with an experiment showcasing the robustness of PC graphs and their learned representations of complex data, specifically, with respect to counterfactual interventions on distinct attributes of images. We assume causal sufficiency (no unobserved confounders) and compare against works that make this assumption. The appendix contains extensive results and details to reproduce our approach. Causal Inference. We evaluate associational, interventional, and counterfactual queries on various causal graphs (Fig. 7) using linear and non-linear data. We assume that the causal graphs are known, while the SCMs with additive Gaussian noise are unspecified. We generate synthetic observational, interventional, and counterfactual data for testing. The observational data are generated by randomly sampling values for the exogenous variables $u$. We then use the exogenous values to compute the values of the endogenous variables $x$. Interventional data are similarly generated, but the structural equations are altered by an intervention. Finally, the counterfactual data consist of pairs $(x, x')$, with $x$ being observational and $x'$ interventional data, both sharing the same $u$. To perform causal inference, we fit the PC graph to the observed data to learn the parameters of the structural equations, including proxy noise distribution parameters. We evaluate the learned SCM by comparing various difference metrics between true and inferred counterfactual values. Details on all metrics are in Appendix C. Here, we only provide results for the most interesting and complex graph among the proposed ones, namely, the butterfly, represented in Fig. 4(c). In Appendix C, we provide a detailed study of all the aforementioned graphs, on a large number of metrics. The results show that PC graphs accurately estimate causal queries for linear and more complex non-linear data. Our method outperforms state-of-the-art methods like CAREFL (Khemakhem et al., 2021), VACA (Sánchez-Martín et al., 2022), and MultiCVAE (Karimi et al., 2020) by large margins, while at the same time requiring significantly fewer parameters.
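To make the interventional query of Theorem 3.1 concrete, here is a minimal sketch on a toy linear chain $x_1 \rightarrow x_2 \rightarrow x_3$ with hand-picked weights (all values below are illustrative assumptions, not the paper's implementation): value nodes are relaxed by gradient descent on $F = \sum_i \|x_i - \mu_i\|^2$, and an intervention additionally zeroes the error of the clamped node so that no information flows back to its parent.

```python
import numpy as np

# Toy linear PC graph: x1 -> x2 -> x3 with fixed scalar weights (assumed values).
w12, w23 = 2.0, -1.0
prior1 = 0.0                       # mean of the root node's prior prediction

def run_inference(clamp_value, intervene, steps=2000, lr=0.05):
    """Clamp x2 and relax x1, x3 by gradient descent on F = sum_i ||x_i - mu_i||^2.

    If `intervene` is True, the error of the clamped node is set to zero,
    which removes its influence on the parent node (the do-operator of Sec. 3).
    """
    x1, x2, x3 = 0.0, clamp_value, 0.0
    for _ in range(steps):
        mu1, mu2, mu3 = prior1, w12 * x1, w23 * x2
        e1, e2, e3 = x1 - mu1, x2 - mu2, x3 - mu3
        if intervene:
            e2 = 0.0                               # do(x2 = clamp_value)
        # Gradients (up to a factor of 2 absorbed in lr): dF/dx1 = e1 - w12*e2, dF/dx3 = e3.
        x1 -= lr * (e1 - w12 * e2)
        x3 -= lr * e3
    return x1, x2, x3

print("conditional  :", run_inference(clamp_value=1.0, intervene=False))
print("intervention :", run_inference(clamp_value=1.0, intervene=True))
```

In the conditional run, the clamped child pulls its parent away from the prior (the query inverts the generative model), whereas in the interventional run the parent stays at its prior value, mirroring the difference between Fig. 2(b) and Fig. 2(c).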
The plots in Fig. 4(c) show that the model is able to correctly infer interventional and counterfactual queries, as shown by the converging MAE of non-intervened nodes. Finally, unlike (Khemakhem et al., 2021), we do not reduce the graph to its causal ordering, and the performance of PC graphs remains stable as causal paths get longer, an issue seen in (Sánchez-Martín et al., 2022). Classification. In the original work on PC graphs (Salvatori et al., 2022a), the authors trained a fully connected model to perform classification tasks using conditional queries. The performances are poor compared to those of hierarchical models for two reasons: first, conditional queries do not impose any direction on the information flow, making the graph learn $p(y|x)$ as well as $p(x|y)$, even though we only need the first term. Similarly, the model also learns $P(x)$ and $P(y)$, that is, how the prior depends on itself. Second, the model complexity is too high, not allowing the model to use any kind of background knowledge, such as hierarchical/structural information or priors, usually present in sparser models. Here, we address the first limitation by performing an intervention on the input: this prevents the error of the inputs from spreading through the network, and enforces a specific direction of the information flow, which goes from cause (the image) to effect (the label). To assess the impact of such an intervention on the test accuracy, we train a fully connected model with 2000 neurons on the MNIST and FashionMNIST datasets, and compute the test accuracy for conditional and interventional queries. We perform a large hyperparameter search on learning rates, activation functions, and weight decay values. In all cases, performing an intervention improves the results. In Fig. 6(d), we present an example showcasing the best results obtained after hyperparameter search. The interventional query led to improvements in the final test accuracy of almost 2% for both datasets. The experiment details can be found in Appendix E. Robustness. A recent work demonstrates that existing deep-learning methods fail to obtain sufficient robustness or performance on counterfactual queries in certain scenarios (De Brouwer, 2022). We show that PC surpasses current state-of-the-art results for counterfactual queries while requiring a simpler architecture, and without relying on ad hoc training techniques. We evaluate the robustness of our model on counterfactual inference tasks of higher dimensions, thereby examining the feasibility of our method for causal inference on more complex data. The dataset we consider consists of tuples $(x, u_z, T, y, T', y')$, where $x$ is an image from the MNIST dataset, and $T$ is the assigned treatment, which is a rotation angle (confounded by $x$). Furthermore, $u_z$ is a hidden exogenous random variable that determines the color added to the observed outcome image $y$ and to the final variable $y'$, a colored and rotated MNIST image representing the counterfactual response obtained when applying the alternative treatment $T'$. We consider SCMs with 4 nodes that encode the four variables, as sketched in Fig. 5. Here, every edge represents a feed-forward network of different depth and hidden dimension 1024. A detailed explanation of how to reproduce the results is given in Appendix E. Results. The results show that PC graphs improve on state-of-the-art methods, despite the fact that we do not use convolutional layers like in the original work.
First, the generated images have an MSE on the test set ($0.0008 \pm 0.0002$) that is lower than that reported in the original work ($0.001 \pm 0.001$). The high quality of reconstruction is also visible in the generated images reported in Fig. 5. Compared to the work (De Brouwer, 2022), we are able to generalize to rotations of 40° (absent in the training data), even if this introduces some noise in the generated output. Furthermore, contrary to the original model, our architecture is robust with respect to the choice of the hyperparameter linked to $u_z$ and does not necessitate to perform a hyperparameter sweep to find the right value. Hence, we conclude that PC graphs are able to correctly model the treatment rotation in the counterfactual outcome, while keeping the color, which is independent of rotation, unchanged. 4 Structure Learning Learning the underlying causal structure from observational data is a critical and highly active research area, primarily due to its implications for explainability and modeling interventions. Traditional approaches use combinatorial search algorithms to find causal structures. However, such methods tend to become computationally expensive and slow as the complexity (e.g., the number of nodes) of the graph increases, as shown in previous works (Chickering, 1996, 2002). Therefore, we focus on gradient-based learning methods instead (Zheng et al., 2018), as these allow us to handle larger causal graph structures in a computationally efficient manner. Let us consider $A$ to be the adjacency matrix of a graph. Ideally, this matrix should be a binary matrix with the property that $a_{i,j} = 1$, if there exists an edge from $v_i$ to $v_j$, and 0, otherwise. From a Bayesian perspective, our method learns the marginal of the graph edges, where $A$ is a matrix composed of continuous, learnable parameters, which assign weights to signify importance of specific connections. To this end, we can consider every PC graph to be fully connected, where the prediction of every node $x_i$ now depends on the entries of the adjacency matrix: $$\mu_i = \sum_{k=0}^{N} a_{k,i} f_{k,i}(x_k),$$ update rule : $\Delta a_{i,j} \propto -\frac{\partial F}{\partial a_{i,j}} = \beta \cdot e_{i,T} W^\top f(x_{j,T}),$ (7) where $\beta$ is the learning rate of the parameters $a_{i,j}$. Then, the entries of $A$ are updated via gradient descent to minimize the variational free energy of Eq. 2. Our goal is to learn an acyclic, sparsely connected graph, which requires a prior distribution that enforces these two constraints. We consider three possible priors: a Gaussian prior, a Laplace prior, and the acyclicity prior. The latter prior is equal to zero if and only if the corresponding graph is acyclic (Zheng et al., 2018): $$l(A) = \exp(-\sum_{i,j} |a_{i,j}|), \quad g(A) = N(0, 1), \quad h(A) = \text{tr}(\exp(A \times A)) - d.$$ The energy function that we aim to minimize via gradient descent is given by the sum of the total energy, as defined in Eq. 2, and the three aforementioned prior distributions, each weighted by a scaling coefficient. The first two priors effectively apply the $L1$ and $L2$ norms to the parameters of the adjacency matrix, and they form the elastic norm when used in conjunction. **Negative Examples.** The regulariser $h(A)$ introduces an inductive bias that may be undesirable, as we know that cyclic structures may be beneficial in several tasks (Salvatori et al., 2022a). 
Without $h(A)$, however, training converges towards a degenerate structure, as shown on the top right of Fig. 6(c), where each output node predicts itself, ignoring any contribution of the input nodes. We solve this degeneration behavior by introducing negative examples, which are data points with a wrong label, into the training set. The model is then trained in a contrastive way (Chen et al., 2020), i.e., by increasing the prediction error of every node for negative examples $k$, and decreasing it otherwise, when the label is correct, as shown in Fig. 6(c). A detailed explanation of how training with negative samples works is given in Appendix C. We show that negative examples address the convergence issue towards a degenerate graph by rendering the label nodes contingent on the inputs, thus steering the model towards adopting a hierarchical structure instead. ### 4.1 Experiments We perform two different structure learning experiments. In the first, we assume that we are provided with non-interventional data generated by from a Bayesian network of unknown structure. The task is to retrieve the original graph starting with a fully connected PC graph, which is a standard problem in causal discovery (Morales-Alvarez et al., 2022; Getzner et al., 2022; Zheng et al., 2018). In the second experiment, we perform classification for MNIST and FashionMNIST using a fully connected PC graph. This time, however, we augment the classification objective with priors to enforce sparsity and acyclicity, and conjecture that (i) this improves the final test accuracy, and (ii) reduces a fully connected graph to the “correct” sparse network, i.e., the hierarchical one. **Structure Learning** Here, we use synthetic data sampled from an SCM with $N \in \{10, 15, 20\}$ nodes and $\{1, 2, 4\}$ expected edges per node. The graph structure is either an Erdős-Rényi or a scale-free random graph. To this end, we denote an Erdős-Rényi graph with $2N$ expected edges as ER2 or a scale-free graph with $4N$ expected edges as SF4, respectively. We vary the graph type, number of nodes and/or edges, to test the scalability and stability of each method. We place uniformly random edge weights onto a binary adjacency matrix of a graph, to obtain a weighted adjacency matrix, $W$. We sample observational data from a set of linear structural equations with additive Gaussian noise. Due to the linearity, we model each edge as a scalar. Hence, we can set the parameters of the weighted adjacency matrix to be the estimated model parameters $\hat{W}$. To prune the parameters that are not used in the linear structural equations that generate the observed data, we require our model to be sparse and acyclic. Thus, we consider the parameters to have prior distributions $h(W)$ and $l(W)$. Then, the experiment consists of training the fully connected model to fit the dataset, and checking whether the PC graph can converge to the random graph structure that generated the data. Figure 6: (a) Experiments on structure learning from synthetic data, generated from Erdős-Rényi and scale-free random graphs with 20 nodes. On the left, the connection strength of the true graph; on the right, the one learned by a PC graph. (b) Structure learning on the 2-MNIST dataset: the plot shows the weights of the adjacency matrix $A$ over the number of epochs, the sketch on the right the resulting PC graph. (c) A description of the two energy functions optimized by the PC graph when training on negative and non-negative examples. 
(d) Table with test error of all experiments performed on MNIST and FashionMNIST, averaged over three seeds. The best results are obtained when augmenting the training process with both the proposed structure learning methods. The results show that PC graphs are able to infer the structure of the data generating process for arbitrary dense random graphs of various complexities. The heatmaps in Fig. 6(a) show that our method can estimate the true weight adjacency matrix for dense SF4 and ER2 graphs with 20 nodes. Hence, we conclude that the learned adjacency matrix, which we chose as the median performing model across multiple seeds, is able to well capture the variable dependencies of the observed data. More details on the dataset generation, how the experiment is performed, quantitative results for various structure learning metrics, and detailed comparisons against highly used baseline methods (PC [Kalisch & Bühlman, 2007], GES [Chickering, 2002], NOTEARS [Zheng et al., 2018], and ICALiNGAM [Shimizu et al., 2006]), are provided in Appendix C. In contrast to the baselines, which all perform worse than our method, our algorithm sustains a stable performance in all ER and SF graph setups, as validated by structural Hamming distance (SHD), F1 score, and other metrics. Classification. Here, we extend the study on the experiments performed in Section 3, and check whether we are able to improve the results by allowing the PC graph to cut extra connections during training. Additionally, we create a new dataset, called 2-MNIST, whose data consist of pairs $(s_0, s_1)$ of MNIST images, and the label is the label of $s_0$. This is to check whether PC networks are able to understand the underlying causal structure of the dataset, and remove connections that start from $s_1$. As an architecture, we consider a PC graph with 6 nodes, one of dimension 784, one of dimension 10, and 4 hidden nodes of dimension $d$. In the case of 2-MNIST, we have two nodes of dimension 784, and only three of dimension $d$. The adjacency matrix $A$ has then dimension $6 \times 6$. Note that, when the entries of $A$ are all equal to one, then this model is equivalent to the fully connected one of Section 3. Here, however, we propose two techniques to augment the training process, and let the model converge to a hierarchical network. The first one consists of adding the three proposed priors on the matrix $A$, to enforce sparsity and acyclicity in the graph; the second one consists of augmenting the dataset via negative examples, while enforcing sparsity via the Laplace prior only. Note that enforcing acyclicity is fundamental, otherwise the circular dependencies in the graph would make the model converge to degenerate structures, such as the ones provided in the top right corner of Fig. 6. More details on this can be found in Appendix E. In the first experiment, we use the 2-MNIST dataset to test whether the acyclic and sparse priors are able to both remove the out-going connections from the second image $s_2$, and learn a hierarchical structure, which we know to be the best one to perform classification on MNIST. In the second experiment, we train the same fully-connected model of Section 3 and check whether the priors allow... to increase the classification accuracy of the model. To conclude, we perform a classification task with the negative examples and the Laplace prior, to test whether this method also allows to avoid converging to degenerated graph structures. 
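To make the quantities used in these experiments concrete, the following is a minimal PyTorch-style sketch of the adjacency-weighted predictions of Eq. 7 and of the penalties corresponding to the three priors of Section 4 (Laplace, Gaussian, and acyclicity). The function names, the use of negative log-priors as additive penalties, the per-edge networks `f[k][i]`, and the reading of $A \times A$ as an elementwise product (as in Zheng et al., 2018) are our assumptions, not the authors' code.

```python
import torch

def predictions(x, A, f):
    """mu_i = sum_k a_{k,i} * f_{k,i}(x_k) (Eq. 7); x is a list of node values,
    A a learnable adjacency matrix, f[k][i] the (hypothetical) edge network k -> i."""
    N = len(x)
    return [sum(A[k, i] * f[k][i](x[k]) for k in range(N)) for i in range(N)]

def laplace_penalty(A):
    """Negative log of the Laplace prior l(A): an L1 sparsity penalty."""
    return A.abs().sum()

def gaussian_penalty(A):
    """Negative log of the Gaussian prior g(A): an L2 penalty."""
    return 0.5 * (A ** 2).sum()

def acyclicity_penalty(A):
    """h(A) = tr(exp(A * A)) - d, zero if and only if the weighted graph is acyclic
    (Zheng et al., 2018); A * A is taken elementwise."""
    d = A.shape[0]
    return torch.trace(torch.matrix_exp(A * A)) - d

# The structure-learning objective then adds these penalties, each with its own
# scaling coefficient, to the variational free energy of Eq. 2:
# loss = free_energy + lam1 * laplace_penalty(A) + lam2 * gaussian_penalty(A) \
#        + lam3 * acyclicity_penalty(A)
```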
The results on the 2-MNIST dataset show that the model immediately prunes the parameters out-going from $s_2$. In the first 100 epochs, the edge with the largest weight is the linear one, which directly connects the input to the label. While this shows that the model correctly learned the dependencies, linear classification on MNIST and FashionMNIST does not yield high accuracies. This problem is naturally addressed in the later stages of the training process, where the entry of the adjacency matrix relative to the linear map loses weight, and hence influence on the final performance of the model. When training finally converges, the resulting model is hierarchical, with one hidden layer, as shown in the plot in Fig. 6(b). This shows that PC graphs are not only able to learn the causal dependencies correctly, but are also able to discriminate among these structures and converge to a well-performing one. In the second experiment, where we perform classification on MNIST and FashionMNIST with $h(A)$, the model shows a clear improvement over the baseline proposed in Section 3. The same applies to the training with negative examples, as reported in the table in Fig. 6(d), which shows a performance comparable to that of training with an acyclicity prior. To reach the usual results that can be obtained via standard neural networks trained with backpropagation (i.e., a test error < 2%), it suffices to fine-tune the model using the newly learned structure.

5 RELATED WORK

In recent years, numerous works have tackled machine learning problems using PC networks. They have been shown to perform well in classification tasks using all kinds of architectures, such as feedforward and convolutional models, graph neural networks, and transformers (Whittington & Bogacz 2017; Han et al. 2018; Salvatori et al. 2022c; Byiringiro et al. 2022; Pinchetti et al. 2022). These results are partially justified by some similarities that PC shares with backprop when performing supervised learning (Song et al. 2020; Millidge et al. 2020; Salvatori et al. 2022b). Multiple works have also applied it to tasks such as image generation (Ororbia & Kifer 2022; Ororbia & Mali 2019), continual learning (Ororbia et al. 2022; Song et al. 2022), and associative memories (Salvatori et al. 2021; Yoo & Wood 2022; Tang et al. 2023). Causality has found applications in problems such as treatment effect estimation, time series modeling, image generation, and natural language processing, as well as enhancing interpretability and fairness in machine learning (Shalit et al. 2017; Runge et al. 2019; Lopez-Paz et al. 2017; Kaushik et al. 2019; Kusner et al. 2017). Different works have used deep generative modeling techniques to investigate causality problems, such as Graph Neural Networks, Variational Autoencoders, flow and diffusion models (Sanchez & Tsafattaris 2022; Khemakhem et al. 2021; Karimi et al. 2020; Pawlowski et al. 2020; Yu et al. 2019). Some works study the problem of learning the causal structure from observational data, previously done via combinatorial search (Spirtes et al. 2000; Chickering 2002; Shimizu et al. 2006; Kalisch & Bühlman 2007). However, combinatorial search algorithms grow doubly exponentially in complexity with respect to the dimension of the graph. Hence, recent works mostly perform continuous optimization using the acyclicity prior that we have also discussed in this work (Zheng et al. 2018; Bello et al. 2022; Yu et al. 2019).
6 CONCLUSION We have provided a bridge between the fields of causality and computational neuroscience by showing that predictive coding graphs have the ability of both learning the DAG structures from observational data, and modeling associational, interventional, and counterfactual distributions (Gettner et al. 2022; Sharma & Kiciman 2020). This makes our method suitable candidate for an end-to-end causality engine, which can answer causal queries without knowing detailed structural equations of an SCM. In the case of causal inference, we have shown how interventions can be performed by setting prediction errors of nodes that we are intervening on to zero, and how this leads to the formulation of predictive-coding based structural causal models that go beyond correlations. For structure learning, we have shown how to use existing techniques to derive causal relations from observational data. More generally, this work further highlights the flexibility of predictive coding models, which can be used to both train deep neural networks that perform well on different machine learning tasks, and to perform causal inference on directed graphical models, extending the toolbox of computational neuroscientists allowing them to compute causal queries and structure learning. REFERENCES Kevin Bello, Bryon Aragam, and Pradeep Ravikumar. DAGMA: Learning DAGs via M-matrices and a log-determinant acyclicity characterization. In *Advances in Neural Information Processing Systems*, 2022. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Occam’s razor. *Information Processing Letters*, 24(6):377–380, 1987. Billy Byiringiro, Tommaso Salvatori, and Thomas Lukasiewicz. Robust graph representation learning via predictive coding. *arXiv:2212.04656*, 2022. Patrick Chao, Patrick Blöbaum, and Shiva Prasad Kasiviswanathan. Interventional and counterfactual inference with diffusion models. *arXiv:2302.00860*, 2023. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. *Proceedings of the 37th International Conference on Machine Learning*, 2020. David Maxwell Chickering. Learning Bayesian networks is NP-complete. *Learning from Data: Artificial Intelligence and Statistics V*, pp. 121–130, 1996. David Maxwell Chickering. Optimal structure identification with greedy search. *Journal of Machine Learning Research*, 3(Nov):507–554, 2002. Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). *arXiv preprint arXiv:1511.07289*, 2015. Juan Correa and Elias Bareinboim. A calculus for stochastic interventions: Causal effect identification and surrogate experiments. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 10093–10100, 2020. Edward De Brouwer. Deep counterfactual estimation with categorical background variables. *Advances in Neural Information Processing Systems*, 35:35213–35225, 2022. Karl Friston. Learning and inference in the brain. *Neural Networks*, 16(9):1325–1352, 2003. Karl Friston. A theory of cortical responses. *Philosophical Transactions of the Royal Society B: Biological Sciences*, 360(1456), 2005. Karl Friston. Hierarchical models in the brain. *PLoS Computational Biology*, 2008. Karl Friston, Jérémie Mattout, Nelson Trujillo-Barreto, John Ashburner, and Will Penny. Variational free energy and the Laplace approximation. *Neuroimage*, 2007. 
Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, et al. Deep end-to-end causal inference. *arXiv:2202.02195*, 2022. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Kuan Han, Haiguang Wen, Yizhen Zhang, Di Fu, Eugenio Culurciello, and Zhongming Liu. Deep predictive coding network with local recurrent processing for object recognition. *Advances in Neural Information Processing Systems*, 31, 2018. Markus Kalisch and Peter Bühlman. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. *Journal of Machine Learning Research*, 8(3), 2007. Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse under imperfect causal knowledge: A probabilistic approach. *Advances in Neural Information Processing Systems*, 33:265–277, 2020. Divyansh Kaushik, Eduard Hovy, and Zachary C. Lipton. Learning the difference that makes a difference with counterfactually-augmented data. *arXiv:1909.12434*, 2019.
4aywmeb97I
Federated learning often operates under constraints like limited bandwidth, varied data quality, or privacy regulations. How does the CA2FL method, with its caching and asynchronous aggregation, perform under such practical constraints?
Tackling the Data Heterogeneity in Asynchronous Federated Learning with Cached Update Calibration

Yujia Wang¹, Yuanpu Cao¹, Jingcheng Wu², Ruoyu Chen², Jinhui Chen¹

¹The Pennsylvania State University ²Carnegie Mellon University

{yjw5427, ymc5533}@psu.edu, {jingchew, ruoyuche}@andrew.cmu.edu, jzc5917@psu.edu

Abstract

Asynchronous federated learning, which enables local clients to send their model updates asynchronously to the server without waiting for others, has recently emerged for its improved efficiency and scalability over traditional synchronized federated learning. In this paper, we study how the asynchronous delay affects the convergence of asynchronous federated learning under non-i.i.d. distributed data across clients. Through the theoretical convergence analysis of one representative asynchronous federated learning algorithm under standard nonconvex stochastic settings, we show that the asynchronous delay can largely slow down the convergence, especially with high data heterogeneity. To further improve the convergence of asynchronous federated learning under heterogeneous data distributions, we propose a novel asynchronous federated learning method with a cached update calibration. Specifically, we let the server cache the latest update for each client and reuse these variables for calibrating the global update at each round. We theoretically prove the convergence acceleration for our proposed method under nonconvex stochastic settings. Extensive experiments on several vision and language tasks demonstrate our superior performance compared to other asynchronous federated learning baselines.

1 Introduction

Federated learning (FL) (Konečný et al., 2016) has become an increasingly popular large-scale machine learning paradigm where machine learning models are trained on multiple edge clients guided by a central server. FedAvg (McMahan et al., 2017), also known as Local SGD (Stich, 2018), is one of the most popular federated optimization methods, where each client locally performs multiple steps of SGD updates followed by the synchronous server aggregation of the local models. However, the traditional synchronous aggregation scheme may cause efficiency and scalability issues, as the server needs to wait for all participating clients to complete the task before conducting the global update step. This has promoted the development of asynchronous federated learning methods such as FedAsync (Xie et al., 2019) and FedBuff (Nguyen et al., 2022), which adopt flexible aggregation schemes and allow clients to asynchronously send back their model updates, and thus improve the overall training efficiency and scalability. Such an asynchronous aggregation scheme does not come without costs: the asynchronous delay, which describes the fact that a delayed local model update could be computed based on a past global model rather than the current global model, slows down the convergence of asynchronous federated learning. Moreover, the negative impact of the asynchronous delay on the convergence gets even worse when the training data are non-i.i.d. distributed across clients. This is intuitive since empirical observation suggests that the global model changes more significantly in adjacent rounds when the data heterogeneity is high.
Consequently, the asynchronous delay would cause the delayed local model update to be more outdated and inconsistent with the current global model, hence worsening the overall model convergence. Furthermore, each global update in asynchronous federated learning... is, by nature, contributed from only a fraction of clients (similar to the partial participation scenarios in synchronous FL). This intensifies the global variance arising from data heterogeneity and leads to a further slowdown in convergence. Therefore, it is crucial to tackle the data heterogeneity issue to improve the overall convergence of asynchronous federated learning. In this work, we rigorously study how the asynchronous delay affects the convergence of asynchronous federated learning under non-i.i.d. distributed data across clients. Through a theoretical convergence analysis of FedBuff (Nguyen et al., 2022), one representative asynchronous federated learning algorithm, we show that the asynchronous delay can largely slow down the convergence, especially with high data heterogeneity. To further improve the convergence of asynchronous federated learning under heterogeneous data distributions, we propose a novel asynchronous federated learning method, Cache-Aided Asynchronous Federated Learning (CA²FL). CA²FL allows the server to cache the latest update from each client and reuses this cached update for calibrating the global update, which does not incur extra communication/computation overhead on clients, or raise any additional privacy concerns (same as the traditional synchronous and asynchronous federated learning methods). We summarize our contribution in this paper as follows: - We present a convergence analysis of FedBuff (Nguyen et al., 2022), one representative asynchronous federated learning algorithm, under non-i.i.d. distributed data across clients (with fewer assumptions and slightly tighter bound on the asynchronous delay term). We demonstrate that the asynchronous delay can theoretically slow down the convergence and such an impact could be further amplified by the highly non-i.i.d. distributed data. - To tackle the convergence degradation in asynchronous federated learning caused by the joint effect of data heterogeneity and asynchronous delay, we propose a novel asynchronous federated aggregation method with cached update calibrations (CA²FL) in which the server maintains cache updates for each client and reuse the cached update for global aggregation calibration. We theoretically show that with the help of cached updates, our proposed method can significantly improve the convergence rate under nonconvex stochastic settings. - Extensive experiments on several vision and language tasks demonstrate our superior performances compared to other asynchronous federated learning baselines and back up our theory. 2 RELATED WORK Synchronous FL and Heterogeneity Issues. Federated learning (Konečný et al., 2016) plays a critical role in jointly training models at edge devices without sharing local data. Since FedAvg (McMahan et al., 2017), many federated learning variants are proposed (Li et al., 2019b; Stich, 2018; Yang et al., 2021) for various training scenarios. Reddi et al. (2021); Tong et al. (2020); Wang et al. (2022) propose adaptive federated optimizers for dealing with heavy-tail stochastic gradient noise distributions. Gu et al. (2021); Yan et al. (2020) focus on improving the overall FL performance by leveraging the latest historical gradients. 
Recently, many works also focused on addressing the data heterogeneity issue through several aspects. FedProx (Li et al., 2020) adds a proximate term to align the local model with the global one. FedDyn (Acar et al., 2021) involves a dynamic regularization term for local and global model consistency. FedNova (Wang et al., 2020b) proposes a normalized averaging mechanism that reduces objective inconsistency with heterogeneous data. Moreover, several works studied how to eliminate the client drift caused by data heterogeneity from the aspect of variance reduction including Karimireddy et al. (2020a,b); Khanduri et al. (2021); Cutkosky & Orabona (2019); Jhunjhunwala et al. (2022). They usually introduce additional control variables to track and correct the local model shift during local training, at the cost of extra communications for synchronizing these control variables. Besides, FedDC (Gao et al., 2022) involves both dynamic regularization terms and local drift variables for model correction. Asynchronous SGD and Asynchronous FL. Asynchronous optimization methods such as asynchronous SGD and its variants have been discussed for many years. Hogwild! SGD (Niu et al., 2011) studies a coordinate-wise asynchronous method without any locking, and Nguyen et al. (2018) 1Here we focus on the FedBuff algorithm without differential privacy for the entire paper. provided a tight convergence analysis for SGD and Hogwild! algorithm. Some other works focus on the theoretical analysis for the asynchronous SGD such as (Mania et al., 2017; Stich et al., 2021). Leblond et al. (2018) studies the asynchronous SAGA method and demonstrates its theoretical convergence. Glasgow & Wootters (2022) explored asynchronous SAGA methods for the distributed-data setting and provided a theoretical analysis. In the context of federated learning, FedAsync (Xie et al., 2019) is proposed for clients to update asynchronously to the server. FedBuff (Nguyen et al., 2022) proposed a buffered asynchronous aggregation strategy. Later Toghani & Uribe (2022) studies the convergence analysis of FedBuff with fewer assumptions. Anarchic Federated Averaging (Yang et al., 2022) focuses on letting the clients decide when and whether to participate in global training. Stripelis et al. (2022) proposed a semi-synchronous federated learning method for energy-efficient training and accelerating convergence in cross-silo settings. SWIFT (Bornstein et al., 2023) is an interesting wait-free decentralized FL paradigm that shares a similar idea to asynchronous FL, and SWIFT also involves caching models by storing the neighboring local models. Moreover, there are several works studying the theoretical convergence analysis in asynchronous federated learning with arbitrary delay (Avdiukhin & Kasiviswanathan, 2021; Mishchenko et al., 2022) or the complete theoretical analysis under various assumptions (Koloskova et al., 2022). ### 3 Preliminaries Findings on Asynchronous Federated Learning **Federated Learning.** In general federated learning framework, we aim to minimize the following objective through $N$ local clients: $$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{N} \sum_{i=1}^{N} F_i(x) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{\xi \sim D_i}[F_i(x; \xi_i)],$$ (3.1) where $x$ represents the model parameters with $d$ dimensions, $F_i(x) = \mathbb{E}_{\xi \sim D_i}[F_i(x; \xi_i)]$ represents the local loss function corresponding to client $i$ and let $D_i$ denotes the local data distribution on client $i$. 
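To ground the discussion that follows, here is a minimal PyTorch-style sketch of the client-side computation shared by the methods described below: starting from the current global model, a client runs $K$ steps of local SGD on its own data and returns the model-update difference $\Delta_i$ to the server. The function name, optimizer settings, and data handling are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch

def local_update(global_model, loss_fn, data_loader, K, eta_l):
    """Run K local SGD steps from the current global model and return
    Delta_i = x_local - x_global as a list of per-parameter tensors."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=eta_l, momentum=0.9)
    for step, (xb, yb) in enumerate(data_loader):
        if step == K:
            break
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    return [p_new.detach() - p_old.detach()
            for p_new, p_old in zip(model.parameters(), global_model.parameters())]
```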
FedAvg (McMahan et al., 2017) is a popular synchronous optimization algorithm to solve Eq. 3.1 where each participating client performs local SGD updates, and the server performs global averaging steps after receiving all the updates from assigned clients. **Asynchronous Federated Learning.** Asynchronous federated learning has been introduced to facilitate efficiency and scalability for clients in solving Eq. 3.1 asynchronously. In asynchronous federated learning, clients are allowed to train and synchronize local models on their own pace. For example, FedBuff (Nguyen et al., 2022) studied an asynchronous federated learning method with a global update buffer and differential privacy mechanism. To give a more concrete idea, we present a general asynchronous federated learning framework, which is essentially FedBuff without the differential privacy part, as shown in Algorithm 1. Specifically, the server initializes by randomly selecting an active client set $\mathcal{M}_t$ with the size of the concurrency $M_c$. Then each assigned client will conduct $K$ steps of local training asynchronously. This means the server does not need to wait until all assigned clients finish their local training to proceed, instead, the server just accumulates the model update in $\Delta_t$ (Line 5 in Algorithm 1) and updates the global model every time it accumulates $M$ updates (Lines 9-11 in Algorithm 1). Meanwhile, once the server receives a client update, it will instantly re-sample another available client to continue the federated learning procedure. In this way, the server always maintains a fixed number of active clients (i.e., the concurrency $M_c$). **Heterogeneity Across Clients.** Several works (Karimireddy et al., 2020b; Acar et al., 2021; Wang et al., 2020b) have shown that synchronized federated learning methods suffer from convergence and empirical degradation when data is heterogeneously distributed across local clients. This issue of model inconsistency also occurs in asynchronous federated learning and may even become worse with the existing of gradient delay, since the model used for local gradient computation is usually different from the current global model, which makes local updates less representative of the global update direction. In order to formally illustrate such a relationship, we conduct the following convergence analysis on Algorithm 1 under standard stochastic nonconvex optimization settings. First, we introduce some necessary assumptions. --- 2 The concurrency implies that the maximum size of the simultaneously active clients is $M_c$. 3 $M$ denotes the buffer size as in Fedbuff and $M \geq 1$. Algorithm 1 FedBuff without DP Input: local step size $\eta_l$, global stepsize $\eta$, server concurrency $M_c$, buffer size $M$; 1: Initialize $\Delta_1 = 0$, $m = 0$ and sample a set of $M_c$ active clients to run local SGD updates. 2: repeat 3: if receive client update then 4: Server accumulates update from client $i$: $\Delta_t \leftarrow \Delta_t + \Delta_i^t$ and set $m \leftarrow m + 1$ 5: Samples another client $j$ from available clients 6: Broadcast the current model $x_t$ to client $j$, and run local SGD updates on client $j$ 7: end if 8: if $m = M$ then 9: Update global model $x_{t+1} = x_t + \eta \frac{\Delta_t}{M}$ 10: Set $m \leftarrow 0$, $\Delta_{t+1} \leftarrow 0$, $t \leftarrow t + 1$ 11: end if 12: until Convergence Assumption 3.1 (Smoothness). 
Each loss function on the $i$-th worker $F_i(x)$ is $L$-smooth, i.e., $\forall x, y \in \mathbb{R}^d$, $$|F_i(x) - F_i(y) - \langle \nabla F_i(y), x - y \rangle| \leq \frac{L}{2} \|x - y\|^2.$$ This also implies the $L$-gradient Lipschitz condition, i.e., $\|\nabla F_i(x) - \nabla F_i(y)\| \leq L \|x - y\|$. Assumption 3.1 is a standard assumption in nonconvex optimization problems, which has been also adopted in [Kingma & Ba, 2015; Reddi et al., 2018; Li et al., 2019a; Yang et al., 2021]. Assumption 3.2 (Bounded Variance). Each stochastic gradient on the $i$-th worker has a bounded local variance, i.e., for all $x, i \in [N]$, we have $\mathbb{E}\left[\|\nabla f_i(x, \xi) - \nabla F_i(x)\|^2\right] \leq \sigma^2$, and the loss function on each worker has a global variance bound, $\frac{1}{N} \sum_{i=1}^{N} \|\nabla F_i(x) - \nabla f(x)\|^2 \leq \sigma_g^2$. Assumption 3.2 is widely used in federated optimization problems [Li et al., 2019a; Reddi et al., 2021; Yang et al., 2021]. The bounded local variance represents the randomness of stochastic gradients, and the bounded global variance represents data heterogeneity between clients. Note that $\sigma_g = 0$ corresponds to the i.i.d setting, in which datasets from each client have the same distribution. Assumption 3.3 (Bounded Gradient Delay). Let $\tau_i^t$ represent the delay for global round $t$ and client $i$ which is applied in Algorithm 1 and 2. $\tau_i^t$ implies the difference between the current global round $t$ and the global round at which client $i$ started to compute the gradient. We assume that the maximum gradient delay is bounded, i.e., $\tau_{\text{max}} = \max_{t \in [T], i \in [N]} \{\tau_i^t\} < \infty$. Assumption 3.3 is a common assumption in convergence analysis for asynchronous federated learning method [Koloskova et al., 2022; Yang et al., 2020]. Note that Assumption 3.3 naturally means that the average delay $\tau_{\text{avg}} = \frac{1}{NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \tau_i^t < \infty$ is bounded. Theorem 3.4. Under Assumptions 3.1, 3.3 denote $f_* = \arg \min_x f(x)$ and $f_1 = f(x_1)$, let $T$ be the total global rounds and $K$ be the number of local SGD training steps. If the local learning rate $\eta = \Theta(\sqrt{KM})$ and $\eta_l = \Theta(1/\sqrt{TK})$ then the global rounds of Algorithm 1 satisfy $$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[\|\nabla f(x_t)\|^2\right] = O\left(\frac{(f_1 - f_*) + \sigma^2}{\sqrt{TKM}}\right) + O\left(\frac{\sigma^2 + K\sigma_g^2}{TK}\right) + O\left(\frac{\sqrt{K}}{\sqrt{TM}}\sigma_g^2\right) + O\left(\frac{K\tau_{\text{max}}\tau_{\text{avg}}\sigma_g^2 + \tau_{\text{max}}\sigma^2}{T}\right).$$ (3.2) Remark 3.5. Theorem 3.4 presents the convergence analysis for Algorithm 1 w.r.t. global communication round $T$, local steps $K$ and the update accumulation amount $M$. From Eq. equation 3.2 it can be seen that the maximum delay $\tau_{\text{max}}$ and the average delay $\tau_{\text{avg}}$ term indeed affects the overall convergence of the asynchronous federated learning algorithm. Particularly, the last term involves joint effect term $O(K\tau_{\text{max}}\tau_{\text{avg}}\sigma_g^2/T)$ where the global variance $\sigma_g^2$ and the delay terms $\tau_{\text{max}}$ and $\tau_{\text{avg}}$ are multiplied together. This implies that the convergence degradation brought by the asynchronous delay is amplified by the high data heterogeneity (large $\sigma_g$). If data are i.i.d. 
distributed across clients, i.e., $\sigma_g = 0$, then $O(K \tau_{\text{max}} \tau_{\text{avg}} \sigma_g^2 / T)$ term vanishes to 0. On the other hand, if data are non-i.i.d. distributed, i.e., $\sigma_g \neq 0$, the term $O(K \tau_{\text{max}} \tau_{\text{avg}} \sigma_g^2 / T)$ will largely slow down the overall convergence (in fact, when $T \leq KM$, this term would become the dominant term in the convergence rate). This verifies our intuition that the data heterogeneity can worsen the impact of asynchronous delay and jointly deteriorate the convergence, which motivates us to develop a novel method for reducing such joint effects and improving the convergence for asynchronous federated learning. Compared to the original analysis in FedBuff, our analysis requires fewer assumptions and enjoys a slightly tighter bound on the asynchronous delay term. 4 Proposed Method: Cache-Aided Asynchronous FL To address the challenges of data heterogeneity and gradient delay across clients and achieve better convergence in asynchronous federated learning, we propose a novel Cache-Aided Asynchronous FL (CA$^2$FL) method. The proposed CA$^2$FL enables the server to maintain and reuse the cached updates for global update calibration. Algorithm 2 summarizes our proposed CA$^2$FL. In general, the CA$^2$FL largely follows the FedBuff framework in Algorithm 1, while the main difference between our proposed CA$^2$FL and Algorithm 1 lies primarily in the global update steps. Specifically, we introduce a cached variable updating shown in Line 5 and 13, and we incorporate a global calibration process in Line 4 and 11. **Algorithm 2** Cached-Aided Asynchronous FL **Input:** local step size $\eta_l$, global stepsize $\eta$, server concurrency $M_c$, buffer size $M$; 1: Initialize $\Delta_1 = 0$, $h_i^1 = 0$ for $i \in [N]$, $h_1 = 0$, $m = 0$ and sample a set of $M_c$ active clients to run local SGD updates. 2: repeat 3: if receive client update then 4: Server accumulates calibrated update from client $i$: $\Delta_t \leftarrow \Delta_t + (\Delta_t^i - h_i^t)$ 5: Server update clients’ cached variables: $h_i^{t+1} = \Delta_t^i$ 6: Set $m \leftarrow m + 1$, $S_t \leftarrow S_t \cup \{i\}$ 7: Samples another client $j$ from available clients 8: Broadcast the current model $x_t$ to client $j$, and run local SGD updates on client $j$ 9: end if 10: if $m = M$ then 11: $v_t = h_t + \frac{1}{|S_t|} \Delta_t$ 12: Update global model $x_{t+1} = x_t + \eta v_t$ 13: Server maintains the cached variable $h_i^{t+1} = h_i^t$ for $i \notin S_t$ 14: Server initialize $h_{t+1} = \frac{1}{N} \sum_{i=1}^{N} h_i^{t+1}$ 15: Set $m \leftarrow 0$, $\Delta_{t+1} \leftarrow 0$, $S_{t+1} \leftarrow \emptyset$, $t \leftarrow t + 1$, 16: end if 17: until Convergence **Cached variable update.** In CA$^2$FL, the server maintains the latest cached update for each client, and reuses this cached update as an approximation of each client’s contribution to the current round’s update. Denote $h_i^t$ as the latest cached variable for client $i$ and $h_t$ as the global cached variable which is the average of $h_i^t$ among all clients, i.e., $h_t = \frac{1}{N} \sum_{i=1}^{N} h_i^t$. Once the server received $\Delta_t^i$ from client $i$, then the server updates the cached variable for it, i.e., $h_i^{t+1} = \Delta_t^i$ (Line 6). For clients which don’t contribute to round $t$, the server keeps the state variable unchanged as $h_i^{t+1} = h_i^t$ (Line 14). 
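To illustrate the bookkeeping just described, below is a minimal sketch of the server side of Algorithm 2, written for flattened parameter vectors. The class name, the event-driven interface, and the single-tensor representation are our simplifications of the pseudocode above, not the authors' implementation.

```python
import torch

class CA2FLServer:
    """Sketch of the server in Algorithm 2: cache h_i per client, accumulate
    calibrated updates, and apply a calibrated global step every M updates."""

    def __init__(self, x0, num_clients, eta, M):
        self.x = x0.clone()                                           # global model x_t
        self.h = [torch.zeros_like(x0) for _ in range(num_clients)]  # cached h_i
        self.h_bar = torch.zeros_like(x0)                             # h_t (Line 14 of the previous round)
        self.delta = torch.zeros_like(x0)                             # accumulated calibrated updates
        self.received = []                                            # S_t
        self.eta, self.M = eta, M

    def on_client_update(self, i, delta_i):
        # Lines 4-6: accumulate the calibrated update Delta_i - h_i and refresh the cache.
        self.delta += delta_i - self.h[i]
        self.h[i] = delta_i.clone()
        self.received.append(i)
        if len(self.received) == self.M:
            self._global_step()

    def _global_step(self):
        # Lines 11-12: v_t = h_t + Delta_t / |S_t|;  x_{t+1} = x_t + eta * v_t.
        v = self.h_bar + self.delta / len(self.received)
        self.x = self.x + self.eta * v
        # Lines 13-14: caches of non-participating clients stay unchanged;
        # recompute h_{t+1} as the average of all cached variables for the next round.
        self.h_bar = torch.stack(self.h).mean(dim=0)
        self.delta.zero_()
        self.received = []
```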
This update rule for the cached variables ensures that the server maintains the latest model update difference for each client for global update calibration.

--- Due to space limitations, we leave further discussions about Theorem 5.4 and the comparison with the FedBuff analysis in (Toghani & Uribe, 2022) to Appendix B.

Global calibration. Once client $i$ finishes the local training and sends $\Delta_i^t$ to the server, the server accumulates $\Delta_i^t - h_i^t$ into $\Delta_t$. Let $S_t$ denote the set of clients whose updates the server received at round $t$. After receiving $M$ updates, we calculate the server update $v_t$ as the sum of the global cached variable $h_t$ and the average accumulated update $\frac{\Delta_t}{|S_t|}$ (Line 11). The global model $x_{t+1}$ is then updated by this calibrated variable $v_t$. Note that $v_t$ is a combination of the latest received model update differences $\Delta_i^t$ and the cached variable $h_t$, i.e., $$v_t = h_t + \frac{1}{|S_t|} \sum_{i \in S_t} (\Delta_i^t - h_i^t). \quad (4.1)$$

Discussion. The design (Eq. 4.1) of the calibration and cached variables is somewhat similar to SAGA (Defazio et al., 2014), a well-recognized stochastic variance-reduction method that stores previously computed gradients and leverages them for reducing the gradient variance. Eq. 4.1 resembles a special form of SAGA obtained by treating the model update differences $\Delta_i^t$ as gradients and applying the scheme globally over different clients. However, it is important to note that our method does not adhere to the unbiased incremental gradient property that SAGA mainly relies on for its variance reduction, which makes our theoretical analysis non-trivial and different from that of SAGA. Therefore, CA$^2$FL should not be considered a direct application of SAGA to asynchronous federated learning. Note that CA$^2$FL does not require extra communication and computation overhead on clients, and it is compatible with privacy-preserving approaches such as differential privacy and secure aggregation.

5 CONVERGENCE ANALYSIS

We first introduce the additional assumption needed for the convergence analysis of our proposed CA$^2$FL algorithm.

Assumption 5.1 (Bounded State Delay). Let $\zeta_j^t$ represent the delay of the state variable for global round $t$ and client $j \notin S_t$ in Algorithm 2. $\zeta_j^t$ is defined for a client $j$ that does not contribute a model update in round $t$ and therefore keeps its state variable $h_j^t$ from its last update. $\zeta_j^t$ denotes the difference between the current global round $t$ and the global round at which this client $j$ started to compute its last gradient. We assume that the maximum state delay is also bounded, i.e., $\zeta_{\text{max}} = \max_{t \in [T], j \in [N]} \{\zeta_j^t\} < \infty$.

Assumption 5.1 is also commonly used in convergence analysis for memory-aided federated learning methods (Gu et al., 2021; Yang et al., 2022). In a nutshell, the state delay describes how many global rounds have passed since the last local training of a client. In the following, we show the convergence results for our proposed CA$^2$FL.

Theorem 5.2.
Under Assumptions 3.1, 3.3, and Assumption 5.1 if the local learning rate $\eta = \Theta(\sqrt{KM})$ and $\eta_l = \Theta(1/\sqrt{TK})$ then the global rounds of Algorithm 2 satisfy $$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}[\|\nabla f(x_t)\|^2] = O\left(\frac{f_1 - f_*}{\sqrt{TKM}}\right) + O\left(\frac{\sigma^2}{\sqrt{TKM}}\right) + O\left(\frac{\sigma^2 + K\sigma_g^2}{TK}\right) + O\left(\frac{(\tau_{\text{max}} + \zeta_{\text{max}})\sigma^2}{T}\right),$$ where $f_* = \arg\min_x f(x)$. Remark 5.3. Theorem 5.2 suggests that with a sufficient amount of global rounds $T$, i.e., $T \geq KM$, our proposed CA$^2$FL method achieves a desired convergence rate of $O(\frac{1}{\sqrt{TKM}})$ w.r.t. global round $T$, local steps $K$ and the update accumulation amount $M$, which matches the convergence rate in traditional synchronous federated learning baselines (Yang et al., 2021; Reddi et al., 2021; Jhunjhunwala et al., 2022). Remark 5.4. Compared with Eq. 3.2, the joint effect term $O(K\tau_{\text{max}}\tau_{\text{avg}}\sigma_g^2/T)$ no longer exists, while in Eq. 5.1, the asynchronous delay $\tau_{\text{max}}$ only relates to the stochastic noise $\sigma$. This suggests that our proposed CA$^2$FL can benefit from the design of reusing the cached update for global update calibration, which tackles the data heterogeneity issue across clients and reduces the joint impact caused by the asynchronous delay and data heterogeneity. Note that our design also contributes to the general data heterogeneity issue in that the $O(\frac{\sqrt{K}}{\sqrt{TM}}\sigma_g^2)$ term in Eq. 3.2 also gets smaller. Together, those two improvements finally lead to a better convergence rate for our proposed CA$^2$FL algorithm. 6 EXPERIMENTAL RESULTS Datasets, models, and methods. We present the experimental results on both vision and language tasks to verify the effectiveness of the proposed method. For the vision tasks, we train the CIFAR-10 dataset with CNN (Wang & Ji, 2022) and ResNet-18 (He et al., 2016) models, and we also train CIFAR-100 (Krizhevsky et al., 2009) datasets with ResNet-18 model, and we provide various data sampling levels and client concurrency settings. For the language tasks, we conduct experiments on fine-tuning a pretrained Bert-base model (Devlin et al., 2018) on several datasets in GLUE benchmark (Wang et al., 2018). We evaluate experiments on non-i.i.d. data distributions by a Dirichlet distribution partitioned strategy similar to (Wang et al., 2020a,b) with several parameters for both vision and language tasks. We adopt the same CNN network as in and ResNet-18 network (He et al., 2016). We compare our proposed CA$^2$FL with the asynchronous FL baseline, FedBuff (without differential privacy) (Nguyen et al., 2022) and FedAsync (Xie et al., 2019) (constant), and with the synchronized FL method, FedAvg (McMahan et al., 2017). Due to the space limit, we leave additional experiments on more datasets and models together with the experiment details in Appendix A. Implementation overview of vision tasks. For experiments on CIFAR-10 and CIFAR-100, the number of local training iterations $K$ on each client is set to two local epochs (the amount of iteration depends on the amount of data for each client, and the batch size is set to 50 for all experiments by default). For local update, we use the SGD optimizer with a learning rate gridding from \{0.001, 0.01, 0.1, 1\} with momentum 0.9 and weight decay of $1e^{-4}$, and the global learning rate is gridding from \{0.1, 1.0, 2.0\} for all methods. 
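The non-i.i.d. client splits mentioned above are generated with a Dirichlet partitioning strategy; a minimal sketch of one common variant, assuming per-class Dirichlet proportions with concentration parameter alpha (not necessarily the exact script used for these experiments), is:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients by drawing, for every class, a
    Dirichlet(alpha) vector of proportions over clients (smaller alpha means
    more heterogeneous clients). A sketch, not the authors' exact script."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet(alpha * np.ones(num_clients))   # share of class c per client
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```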
We set a total of 100 clients in the network and the concurrency $M_c = 20$ if there is no further instructions, and we set the update accumulation amount $M = 10$ by default. Implementation overview of language tasks. For experiments on fine-tuning Bert-base model on the MRPC, SST-2, RTE and CoLA datasets from the GLUE benchmark, the number of local iterations $K$ on each client is one local epoch (the amount of iteration depends on the amount of data for each client, and the batch size is set to 32 for all experiments by default). We employ Dir (0.6) for non-i.i.d. data partitioned among clients. We adopt the low-rank adaptation (LoRA) (Hu et al., 2021) as the parameter-efficient fine-tuning method. Specifically, for a pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$, LoRA freezes $W_0$ but tuned the $\Delta W$ by representing with a low-rank decomposition with rank $r \ll \min(d,k)$, $W_0 + \Delta W = W_0 + BA$, where $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$ are two trainable parameters. For all experiments, we choose $r = 1$ and $\alpha_{\text{LoRA}} = 1$. For local update, we use the widely-used AdamW optimizer with a learning rate gridding from \{5e$^{-5}$, 1e$^{-4}$, 5e$^{-4}$, 1e$^{-3}$, 5e$^{-3}$\} with weight decay of $1e^{-4}$, and the global learning rate is gridding from \{0.1, 1\} for all methods. We set a total of 10 clients in the network and the concurrency $M_c = 5$ if there are no further instructions, and we set the update accumulation amount $M = 3$ by default. 6.1 Main Results Table 1 shows the overall performance of training CIFAR-10 with a CNN model and the ResNet-18 model. We observe that the proposed CA$^2$FL shows improvement upon the FedBuff and FedAsync. Particularly, when training with the lightweight CNN model (with about 2.2M trainable parameters), the training loss of FedAsync is severely fluctuating and cannot converge when $\alpha = 0.1$, while our proposed CA$^2$FL are more robust to the highly heterogeneous settings and achieve better result than FedBuff. Table 2 presents the overall test accuracy of experiments on CIFAR-100 with two data heterogeneity levels. For $\alpha = 0.1$, our proposed CA$^2$FL achieves higher test accuracy compared to FedBuff but has lower accuracy than FedAsync. Specifically, when the data is highly heterogeneously distributed, e.g., $\alpha = 0.01$, our CA$^2$FL method significantly outperforms than FedBuff and FedAsync. For fine-tuning the Bert-base model on the GLUE benchmark, Table 3 presents the evaluation results for four datasets with several tasks. Note that the MRPC, SST-2, and RTE datasets are evaluated --- 5We also provide experimental results on fine-tuning TinyImageNet with two ResNet models, and parameter-efficient fine-tuning GPT-2 small model on E2E NLG Challenge. Table 1: The test accuracy of different models on the CIFAR-10 dataset with different models and data heterogeneity degrees. We report the mean accuracy and the standard derivation for the last 5 rounds. | Method | Dir(0.3) | Dir(0.1) | |----------|---------------------------|---------------------------| | | CNN Acc. & std | ResNet-18 Acc. & std | CNN Acc. & std | ResNet-18 Acc. & std | | FedAsync | 62.29 ± 0.16 | 79.8 ± 2.28 | - | 40.58 ± 2.92 | | FedBuff | 60.74 ± 1.18 | 78.53 ± 3.31 | 53.96 ± 0.10 | 63.03 ± 3.17 | | CA²FL | **64.40 ± 0.32** | **83.79 ± 0.34** | **57.62 ± 0.42** | **68.37 ± 1.97** | by the validation accuracy, while the CoLA dataset is evaluated by Matthew’s correlation. 
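As an aside on the fine-tuning setup described above, the LoRA parameterization (frozen $W_0$, trainable low-rank factors $B$ and $A$ with $r = 1$ and $\alpha_{\text{LoRA}} = 1$) can be sketched as follows; wrapping the Bert layers this way means only the low-rank factors are trained and communicated. The initialization and the wrapper interface are our assumptions, not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style linear layer: y = base(x) + alpha * x (BA)^T,
    with the pretrained weight W_0 frozen and only A, B trainable."""

    def __init__(self, base: nn.Linear, r: int = 1, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                         # freeze W_0 (and bias)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # A: r x k
        self.B = nn.Parameter(torch.zeros(d_out, r))        # B: d x r (zero init, so Delta W = 0 at start)
        self.alpha = alpha

    def forward(self, x):
        return self.base(x) + self.alpha * (x @ self.A.t() @ self.B.t())
```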
We observe that FedAsync achieves higher validation accuracy on MRPC, while for the other tasks and datasets, our proposed CA²FL obtains better evaluation results. Moreover, we plot the training loss w.r.t. the global rounds in Figure 1, which verifies the theoretical convergence improvements of our proposed CA²FL.

Figure 1: Training/fine-tuning loss on several models and datasets. (a) CIFAR-10 Dir(0.1), (b) CIFAR-10 Dir(0.3), (c) MRPC, (d) SST-2.

Table 3: The results of the Bert-base model on several language datasets with data heterogeneity degree Dir(0.6). We report the mean evaluation metrics and the standard deviation for the last 5 rounds.

| Method | MRPC Acc. & std | SST-2 Acc. & std | RTE Acc. & std | CoLA Acc. & std |
|----------|-----------------|------------------|---------------|-----------------|
| FedAsync | **82.86 ± 0.42** | 87.32 ± 3.76 | 62.09 ± 0.76 | 54.53 ± 1.52 |
| FedBuff | 78.68 ± 0.41 | 86.06 ± 3.86 | 60.07 ± 1.09 | 55.57 ± 0.94 |
| CA²FL | 79.26 ± 0.12 | **90.76 ± 1.02** | **65.63 ± 0.35** | **56.10 ± 0.25** |

We also conduct a detailed comparison of the overall training/fine-tuning speedup to investigate the efficiency of our proposed method in Table 4. We simulate the wall-clock delay by assuming that 80% of clients have normal local training processes, 10% have mild delays, and the remaining 10% have severe delays. Specifically, suppose that for each task a client would finish its local training in $t_{\text{train}}$ seconds. Then the simulated local training time is $t_i \times t_{\text{train}}$, where $t_i \sim \text{Uniform}(0.5, 1)$ for normal clients, $t_i \sim \text{Uniform}(1, 2)$ for clients with mild delays, and $t_i \sim \text{Uniform}(2, 3)$ for clients with severe delays. We provide an ablation study on different simulation settings in the Appendix. From Table 4, we observe that our proposed CA$^2$FL maintains better training/fine-tuning efficiency among all asynchronous methods. CA$^2$FL also shows the advantage of asynchronous learning in image classification tasks. However, the efficiency of CA$^2$FL is still challenged compared to synchronized FL in the case of fine-tuning pre-trained models. We leave a further investigation of this phenomenon to future work.

Table 2: The test accuracy of different models on the CIFAR-100 dataset with different data heterogeneity degrees. We report the mean accuracy and the standard deviation for the last 5 rounds.

| Method | Dir(0.1) Acc. & std | Dir(0.01) Acc. & std |
|----------|---------------------|----------------------|
| FedAsync | **62.91 ± 1.67** | - |
| FedBuff | 57.12 ± 0.60 | 32.49 ± 1.31 |
| CA²FL | 59.50 ± 0.24 | **37.30 ± 0.26** |

Table 4: Training/fine-tuning time simulation (in units of 10 seconds) to reach the target validation accuracy (Matthew's correlation for CoLA). For each dataset, the concurrency is fixed for a fair comparison. **Bold** represents the best evaluation results and the _underline_ represents the best results for asynchronous FL.

| Dataset | Acc. | FedAsync | FedBuff | CA$^2$FL | FedAvg |
|-------|------|----------|---------|----------|--------|
| CIFAR-10 | 80% | 268.80 | 291.53 | **214.16** | 388.64 |
| CIFAR-100 | 55% | 333.47 | 295.49 | **233.49** | 476.78 |
| MRPC | 80% | 2549.54 | 403.95 | **87.39** | 97.71 |
| SST-2 | 90% | 2853.5 | 2079.35 | 648.71 | **572.01** |
| RTE | 63% | 815.94 | 420.83 | **79.61** | 95.17 |
| CoLA | 55% | 217.23 | 144.64 | **34.75** | **0.79** |

Figure 2: Test accuracy of ablation studies for FedBuff and CA$^2$FL in training CIFAR-10 on the ResNet-18 model.

We conduct ablation studies to investigate the effect of data heterogeneity, the delay simulation strategies, and the relationship between the concurrency and the buffer size $M$. Due to space constraints, we leave detailed ablation results and discussions to Appendix A. From plots (a) and (b) of Figure 2, we can observe the impact of data heterogeneity on FedBuff and CA$^2$FL. They show that CA$^2$FL is overall less sensitive to data heterogeneity than FedBuff, with less fluctuation. Plot (c) shows the ablation study on the concurrency $M_c$ with a fixed buffer size $M = 10$: the accuracy decreases as the concurrency increases. Plot (d) shows the impact of the buffer size $M$ with a fixed concurrency $M_c = 20$. We observe that as the buffer size $M$ increases, the overall performance improves w.r.t. the global round $T$.

7 CONCLUSIONS

In this paper, we first investigate the convergence of FedBuff under non-convex heterogeneous data distribution settings and show that the data heterogeneity amplifies the negative impact of the asynchronous delay, which slows down the convergence of asynchronous federated learning. To address this convergence degradation issue, we propose a novel asynchronous federated learning method, CA$^2$FL, which caches and reuses previous updates for global calibration. We provide a theoretical analysis under non-convex stochastic settings that demonstrates the significant convergence improvement of our proposed CA$^2$FL. Empirical results demonstrate the superior performance of the proposed CA$^2$FL compared to general asynchronous federated learning, and also show that the proposed MF-CA$^2$FL could largely reduce the memory overhead while maintaining the performance benefits of the cached updates.

REFERENCES

Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=B7v4QMR6Z9w.

Dmitrii Avdiukhin and Shiva Kasiviswanathan. Federated learning under arbitrary communication patterns. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 425–435, 2021.

Marco Bornstein, Tahseen Rabbani, Evan Z Wang, Amrit Bedi, and Furong Huang. SWIFT: Rapid decentralized federated learning via wait-free model communication. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=jh1nCir1R3d.

Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex sgd. *Advances in neural information processing systems*, 32, 2019.

Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. *Advances in neural information processing systems*, 27, 2014.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10112–10121, 2022. Margalit R Glasgow and Mary Wootters. Asynchronous distributed optimization with stochastic delays. In *International Conference on Artificial Intelligence and Statistics*, pp. 9247–9279. PMLR, 2022. Xinran Gu, Kaixuan Huang, Jingzhao Zhang, and Longbo Huang. Fast federated learning in the presence of arbitrary device unavailability. *Advances in Neural Information Processing Systems*, 34:12052–12064, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, and Gauri Joshi. Fedvarp: Tackling the variance due to partial client participation in federated learning. In *Uncertainty in Artificial Intelligence*, pp. 906–916. PMLR, 2022. Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in federated learning. *arXiv preprint arXiv:2008.03606*, 2020a. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In *International Conference on Machine Learning*, pp. 5132–5143. PMLR, 2020b. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, and Pramod Varshney. Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning. *Advances in Neural Information Processing Systems*, 34:6050–6061, 2021.
N0I2RtD8je
Additionally, to what degree can the same effects of the goal-baseline regularization be incorporated in the goal prompt? A more detailed goal prompt can just specify to ignore the irrelevant information.
Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning Juan Rocamonde†‡ FAR AI Victoriano Montesinos Vertebra Elvis Nava ETH AI Center Ethan Perez* Anthropic David Lindner*‡ ETH Zurich Figure 1: We use CLIP as a reward model to train a MuJoCo humanoid robot to (1) stand with raised arms, (2) sit in a lotus position, (3) do the splits, and (4) kneel on the ground (from left to right). We specify each task using a single sentence text prompt. The prompts are simple (e.g., “a humanoid robot kneeling”) and none of these tasks required prompt engineering. See Section 4.3 for details on our experimental setup. Abstract Reinforcement learning (RL) requires either manually specifying a reward function, which is often infeasible, or learning a reward model from a large amount of human feedback, which is often very expensive. We study a more sample-efficient alternative: using pretrained vision-language models (VLMs) as zero-shot reward models (RMs) to specify tasks via natural language. We propose a natural and general approach to using VLMs as reward models, which we call VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn complex tasks without a manually specified reward function, such as kneeling, doing the splits, and sitting in a lotus position. For each of these tasks, we only provide a single sentence text prompt describing the desired task with minimal prompt engineering. We provide videos of the trained agents at: https://sites.google.com/view/vlm-rm/. We can improve performance by providing a second “baseline” prompt and projecting out parts of the CLIP embedding space irrelevant to distinguish between goal and baseline. Further, we find a strong scaling effect for VLM-RMs: larger VLMs trained with more compute and data are better reward models. The failure modes of VLM-RMs we encountered are all related to known capability limitations of current VLMs, such as limited spatial reasoning ability or visually unrealistic environments that are far off-distribution for the VLM. We find that VLM-RMs are remarkably robust as long as the VLM is large enough. This suggests that future VLMs will become more and more useful reward models for a wide range of RL applications. †Additional affiliation: Vertebra ‡Correspondence to: juancarlosrocamonde@gmail.com, david.lindner@inf.ethz.ch *Equal contribution Source code available at https://github.com/AlignmentResearch/vlmrm 1 INTRODUCTION Training reinforcement learning (RL) agents to perform complex tasks in vision-based domains can be difficult, due to high costs associated with reward specification. Manually specifying reward functions for real world tasks is often infeasible, and learning a reward model from human feedback is typically expensive. To make RL more useful in practical applications, it is critical to find a more sample-efficient and natural way to specify reward functions. One natural approach is to use pretrained vision-language models (VLMs), such as CLIP (Radford et al., 2021) and Flamingo (Alayrac et al., 2022), to provide reward signals based on natural language. However, prior attempts to use VLMs to provide rewards require extensive fine-tuning VLMs (e.g., Du et al., 2023) or complex ad-hoc procedures to extract rewards from VLMs (e.g., Mahmoudieh et al., 2022). In this work, we demonstrate that simple techniques for using VLMs as zero-shot language-grounded reward models work well, as long as the chosen underlying model is sufficiently capable. 
Concretely, we make four key contributions. First, we propose VLM-RM, a general method for using pre-trained VLMs as a reward model for vision-based RL tasks (Section 3). We propose a concrete implementation that uses CLIP as the VLM and the cosine similarity between the CLIP embedding of the current environment state and a simple language prompt as the reward function. We can optionally regularize the reward model by providing a "baseline prompt" that describes a neutral state of the environment and partially projecting the representations onto the direction between baseline and target prompts when computing the reward. Second, we validate our method in the standard CartPole and MountainCar RL benchmarks (Section 4.2). We observe high correlation between VLM-RMs and the ground truth rewards of the environments and successfully train policies to solve the tasks using CLIP as a reward model. Furthermore, we find that the quality of CLIP as a reward model improves if we render the environment using more realistic textures. Third, we train a MuJoCo humanoid to learn complex tasks, including raising its arms, sitting in a lotus position, doing the splits, and kneeling (Figure 1; Section 4.3), using a CLIP reward model derived from single-sentence text prompts (e.g., "a humanoid robot kneeling"). Fourth, we study how VLM-RMs' performance scales with the size of the VLM, and find that VLM scale is strongly correlated with VLM-RM quality (Section 4.4). In particular, we can only learn the humanoid tasks in Figure 1 with the largest publicly available CLIP model. Our results indicate that VLMs are powerful zero-shot reward models. While current models, such as CLIP, have important limitations that persist when used as VLM-RMs, we expect such limitations to mostly be overcome as larger and more capable VLMs become available. Overall, VLM-RMs are likely to enable us to train models to perform increasingly sophisticated tasks from human-written task descriptions.

2 BACKGROUND

Partially observable Markov decision processes. We formulate the problem of training RL agents in vision-based tasks as a partially observable Markov decision process (POMDP). A POMDP is a tuple \((S, A, \theta, R, O, \phi, \gamma, d_0)\) where: \(S\) is the state space; \(A\) is the action space; \(\theta(s'|s, a) : S \times S \times A \rightarrow [0, 1]\) is the transition function; \(R(s, a, s') : S \times A \times S \rightarrow \mathbb{R}\) is the reward function; \(O\) is the observation space; \(\phi(o|s) : S \rightarrow \Delta(O)\) is the observation distribution; \(\gamma \in [0, 1)\) is the discount factor; and \(d_0(s) : S \rightarrow [0, 1]\) is the initial state distribution. At each point in time, the environment is in a state \(s \in S\). At each timestep, the agent takes an action \(a \in A\), causing the environment to transition to state \(s'\) with probability \(\theta(s'|s, a)\). The agent then receives an observation \(o\) with probability \(\phi(o|s')\) and a reward \(r = R(s, a, s')\). A sequence of states and actions is called a trajectory \(\tau = (s_0, a_0, s_1, a_1, \ldots)\), where \(s_i \in S\) and \(a_i \in A\). The return of such a trajectory \(\tau\) is the discounted sum of rewards \(g(\tau; R) = \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})\). The agent's goal is to find a (possibly stochastic) policy \(\pi(a|s)\) that maximizes the expected return \(G(\pi) = \mathbb{E}_{\tau(\pi)}[g(\tau(\pi); R)]\). We only consider finite-horizon trajectories, i.e., \(|\tau| < \infty\).

Vision-language models.
We broadly define vision-language models (VLMs; Zhang et al., 2023) as models capable of processing sequences of both language inputs \( l \in L^{\leq n} \) and vision inputs \( i \in I^{\leq m} \). Here, \( L \) is a finite alphabet and \( L^{\leq n} \) contains strings of length less than or equal to \( n \), whereas \( I \) is the space of 2D RGB images and \( I^{\leq m} \) contains sequences of images with length less than or equal to \( m \). CLIP models. One popular class of VLMs are Contrastive Language-Image Pretraining (CLIP; Radford et al., 2021) encoders. CLIP models consist of a language encoder \( \text{CLIP}_L : L^{\leq n} \rightarrow V \) and an image encoder \( \text{CLIP}_I : I \rightarrow V \) mapping into the same latent space \( V = \mathbb{R}^k \). These encoders are jointly trained via contrastive learning over pairs of images and captions. Commonly CLIP encoders are trained to minimize the cosine distance between embeddings for semantically matching pairs and maximize the cosine distance between semantically non-matching pairs. 3 Vision-Language Models as Reward Models (VLM-RMs) This section presents how we can use VLMs as a learning-free (zero-shot) way to specify rewards from natural language descriptions of tasks. Importantly, VLM-RMs avoid manually engineering a reward function or collecting expensive data for learning a reward model. 3.1 Using Vision-Language Models as Rewards Let us consider a POMDP without a reward function \((S, A, \theta, O, \phi, \gamma, d_0)\). We focus on vision-based RL where the observations \( o \in O \) are images. For simplicity, we assume a deterministic observation distribution \( \phi(o|s) \) defined by a mapping \( \psi(s) : S \rightarrow O \) from states to image observation. We want the agent to perform a task \( T \) based on a natural language description \( l \in L^{\leq n} \). For example, when controlling a humanoid robot (Section 4.3) \( T \) might be the robot kneeling on the ground and \( l \) might be the string “a humanoid robot kneeling”. To train the agent using RL, we need to first design a reward function. We propose to use a VLM to provide the reward \( R(s) \) as: \[ R_{\text{VLM}}(s) = \text{VLM}(l, \psi(s), c), \] where \( c \in L^{\leq n} \) is an optional context, e.g., for defining the reward interactively with a VLM. This formulation is general enough to encompass the use of several different kinds of VLMs, including image and video encoders, as reward models. CLIP as a reward model. In our experiments, we chose a CLIP encoder as the VLM. A very basic way to use CLIP to define a reward function is to use cosine similarity between a state’s image representation and the natural language task description: \[ R_{\text{CLIP}}(s) = \frac{\text{CLIP}_L(l) \cdot \text{CLIP}_I(\psi(s))}{\|\text{CLIP}_L(l)\| \cdot \|\text{CLIP}_I(\psi(s))\|}, \] In this case, we do not require a context \( c \). We will sometimes call the CLIP image encoder a state encoder, as it encodes an image that is a direct function of the POMDP state, and the CLIP language encoder a task encoder, as it encodes the language description of the task. 3.2 Goal-Baseline Regularization to Improve CLIP Reward Models While in the previous section, we introduced a very basic way of using CLIP to define a task-based reward function, this section proposes Goal-Baseline Regularization as a way to improve the quality of the reward by projecting out irrelevant information about the observation. 
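Before describing the regularization, the following minimal sketch illustrates the basic cosine-similarity reward \(R_{\text{CLIP}}\) defined in Section 3.1. The function names and the use of precomputed NumPy embeddings are our own simplifications rather than the authors' implementation; in practice the embeddings would come from a CLIP text and image encoder (e.g., via the open_clip package).

```python
# Minimal sketch (our assumptions): embeddings are precomputed NumPy vectors
# standing in for CLIP_L(l) and CLIP_I(psi(s)).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def clip_reward(task_embedding: np.ndarray, state_embedding: np.ndarray) -> float:
    """R_CLIP(s): cosine similarity between the task prompt and the rendered state."""
    return cosine_similarity(task_embedding, state_embedding)

# Toy usage with random vectors in place of real CLIP embeddings.
rng = np.random.default_rng(0)
task_emb, state_emb = rng.normal(size=512), rng.normal(size=512)
print(clip_reward(task_emb, state_emb))
```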
So far, we assumed we only have a task description \( l \in L^{\leq n} \). To apply goal-baseline regularization, we require a second “baseline” description \( b \in L^{\leq n} \). The baseline \( b \) is a natural language description of the environment setting in its default state, irrespective of the goal. For example, our baseline description for the humanoid is simply “a humanoid robot,” whereas the task description is, e.g., “a humanoid robot kneeling.” We obtain the goal-baseline regularized CLIP reward model \((R_{\text{CLIP-Reg}})\) by projecting our state embedding onto the line spanned by the baseline and task embeddings. Definition 1 (Goal-Baseline Regularization). Given a goal task description \( l \) and baseline description \( b \), let \( g = \frac{\text{CLIP}_L(l)}{\|\text{CLIP}_L(l)\|} \), \( b = \frac{\text{CLIP}_L(b)}{\|\text{CLIP}_L(b)\|} \), \( s = \frac{\text{CLIP}_I(\psi(s))}{\|\text{CLIP}_I(\psi(s))\|} \) be the normalized encodings, and \( L \) be the line spanned by \( b \) and \( g \). The goal-baseline regularized reward function is given by \[ R_{\text{CLIP-Reg}}(s) = 1 - \frac{1}{2}\|\alpha \text{proj}_L s + (1 - \alpha)s - g\|^2_2, \] where \( \alpha \) is a parameter to control the regularization strength. In particular, for \( \alpha = 0 \), we recover our initial CLIP reward function \( R_{\text{CLIP}} \). On the other hand, for \( \alpha = 1 \), the projection removes all components of \( s \) orthogonal to \( g - b \). Intuitively, the direction from \( b \) to \( g \) captures the change from the environment’s baseline to the target state. By projecting the reward onto this direction, we directionally remove irrelevant parts of the CLIP representation. However, we can not be sure that the direction really captures all relevant information. Therefore, instead of using \( \alpha = 1 \), we treat it as a hyperparameter. However, we find the method to be relatively robust to changes in \( \alpha \) with most intermediate values being better than 0 or 1. 3.3 RL with CLIP Reward Model We can now use VLM-RMs as a drop-in replacement for the reward signal in RL. In our implementation, we use the Deep Q-Network (DQN; Mnih et al., 2015) or Soft Actor-Critic (SAC; Haarnoja et al., 2018) RL algorithms. Whenever we interact with the environment, we store the observations in a replay buffer. In regular intervals, we pass a batch of observations from the replay buffer through a CLIP encoder to obtain the corresponding state embeddings. We can then compute the reward function as cosine similarity between the state embeddings and the task embedding which we only need to compute once. Once we have computed the reward for a batch of interactions, we can use them to perform the standard RL algorithm updates. Appendix C contains more implementation details and pseudocode for our full algorithm in the case of SAC. 4 Experiments We conduct a variety of experiments to evaluate CLIP as a reward model with and without goal-baseline regularization. We start with simple control tasks that are popular RL benchmarks: CartPole and MountainCar (Section 4.2). These environments have a ground truth reward function and a simple, well-structured state space. We find that our reward models are highly correlated with the ground truth reward function, with this correlation being greatest when applying goal-baseline regularization. 
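To make Definition 1 from Section 3.2 concrete before discussing further results, here is a minimal NumPy sketch of the goal-baseline regularized reward. It is our own simplification, not the authors' implementation: it operates on precomputed embeddings and interprets "the line spanned by \(b\) and \(g\)" as the affine line through the two normalized embeddings; the paper's code may differ in details.

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def goal_baseline_reward(goal_emb, baseline_emb, state_emb, alpha: float) -> float:
    """R_CLIP-Reg(s) from Definition 1: partially project the normalized state
    embedding onto the line through the baseline and goal embeddings."""
    g, b, s = normalize(goal_emb), normalize(baseline_emb), normalize(state_emb)
    d = g - b                                  # direction from baseline to goal
    proj = b + ((s - b) @ d) / (d @ d) * d     # projection of s onto the line through b and g
    reg = alpha * proj + (1.0 - alpha) * s     # partial projection controlled by alpha
    return 1.0 - 0.5 * float(np.sum((reg - g) ** 2))
```

Note that for \(\alpha = 0\) the expression reduces to \(1 - \frac{1}{2}\|s - g\|^2 = s \cdot g\) for unit-norm embeddings, i.e., the plain cosine-similarity reward, matching the statement in Definition 1.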
Furthermore, we find that the reward model's outputs can be significantly improved by making a simple modification to make the environment's observation function more realistic, e.g., by rendering the mountain car over a mountain texture. We then move on to our main experiment: controlling a simulated humanoid robot (Section 4.3). We use CLIP reward models to specify tasks from short language prompts; several of these tasks are challenging to specify manually. We find that these zero-shot CLIP reward models are sufficient for RL algorithms to learn most tasks we attempted with little to no prompt engineering or hyperparameter tuning. Finally, we study the scaling properties of the reward models by using CLIP models of different sizes as reward models in the humanoid environment (Section 4.4). We find that larger CLIP models are significantly better reward models. In particular, we can only successfully learn the tasks presented in Figure 1 when using the largest publicly available CLIP model.

Experiment setup. We extend the implementation of the DQN and SAC algorithms from the stable-baselines3 library (Raffin et al., 2021) to compute rewards from CLIP reward models instead of from the environment. As shown in Algorithm 1 for SAC, we alternate between environment steps, computing the CLIP reward, and RL algorithm updates. We run the RL algorithm updates on a single NVIDIA RTX A6000 GPU. The environment simulation runs on CPU, but we perform rendering and CLIP inference distributed over 4 NVIDIA RTX A6000 GPUs. We provide the code to reproduce our experiments in the supplementary material. We discuss hyperparameter choices in Appendix C, but we mostly use standard parameters. Appendix C also contains a table with a full list of prompts for our experiments, including both goal and baseline prompts when using goal-baseline regularization.

4.1 How can we Evaluate VLM-RMs?

Evaluating reward models can be difficult, particularly for tasks for which we do not have a ground truth reward function. In our experiments, we use 3 types of evaluation: (i) evaluating policies using ground truth reward; (ii) comparing reward functions using EPIC distance; (iii) human evaluation.

Evaluating policies using ground truth reward. If we have a ground truth reward function for a task, such as for the CartPole and MountainCar, we can use it to evaluate policies. For example, we can train a policy using a VLM-RM and evaluate it using the ground truth reward. This is the most popular way to evaluate reward models in the literature and we use it for environments where we have a ground-truth reward available.

Comparing reward functions using EPIC distance. The "Equivalent-Policy Invariant Comparison" (EPIC; Gleave et al., 2021) distance compares two reward functions without requiring the expensive policy training step. EPIC distance is provably invariant on the equivalence class of reward functions that induce the same optimal policy. We consider only goal-based tasks, for which the EPIC distance is particularly easy to compute. In particular, a low EPIC distance between the CLIP reward model and the ground truth reward implies that the CLIP reward model successfully separates goal states from non-goal states. Appendix A discusses in more detail how we compute the EPIC distance in our case, and how we can intuitively interpret it for goal-based tasks.

Human evaluation.
For tasks without a ground truth reward function, such as all humanoid tasks in Figure 1, we need to perform human evaluations to decide whether our agent is successful. We define “success rate” as the percentage of trajectories in which the agent successfully performs the task in at least 50% of the timesteps. For each trajectory, we have a single rater\(^2\) label how many timesteps were spent successfully performing the goal task, and use this to compute the success rate. However, human evaluations can also be expensive, particularly if we want to evaluate many different policies, e.g., to perform ablations. For such cases, we additionally collect a dataset of human-labelled states for each task, including goal states and non-goal states. We can then compute the EPIC distance with these binary human labels. Empirically, we find this to be a useful proxy for the reward model quality which correlates well with the performance of a policy trained using the reward model. For more details on our human evaluation protocol, we refer to Appendix B. Our human evaluation protocol is very basic and might be biased. Therefore, we additionally provide videos of our trained agents at https://sites.google.com/view/vlm-rm. 4.2 Can VLM-RMs Solve Classic Control Benchmarks? As an initial validation of our methods, we consider two classic control environments: CartPole and MountainCar, implemented in OpenAI Gym (Brockman et al., 2016). In addition to the default MountainCar environment, we also consider a version with a modified rendering method that adds textures to the mountain and the car so that it resembles the setting of “a car at the peak of a mountain” more closely (see Figure 2). This environment allows us to test whether VLM-RMs work better in visually “more realistic” environments. To understand the rewards our CLIP reward models provide, we first analyse plots of their reward landscape. In order to obtain a simple and interpretable visualization figure, we plot CLIP rewards against a one-dimensional state space parameter, that is directly related to the completion of the task. For the CartPole (Figure 2a) we plot CLIP rewards against the angle of the pole, where the ideal position is at angle 0. For the (untextured and textured) MountainCar environments Figures 2b and 2c, we plot CLIP rewards against the position of the car along the horizontal axis, with the goal location being around \(x = 0.5\). \(^2\)One of the authors. Figure 2: We study the CLIP reward landscape in two classic control environments: CartPole and MountainCar. We plot the CLIP reward as a function of the pole angle for the CartPole (a) and as a function of the x position for the MountainCar (b,c). We mark the respective goal states with a vertical line. The line color encodes different regularization strengths $\alpha$. For the CartPole, the maximum reward is always when balancing the pole and the regularization has little effect. For the MountainCar, the agent obtains the maximum reward on top of the mountain. But, the reward landscape is much more well-behaved when the environment has textures and we add goal-baseline regularization – this is consistent with our results when training policies. Figure 2a shows that CLIP rewards are well-shaped around the goal state for the CartPole environment, whereas Figure 2b shows that CLIP rewards for the default MountainCar environment are poorly shaped, and might be difficult to learn from, despite still having roughly the right maximum. 
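Before turning to the humanoid experiments, the sketch below summarizes the two quantities used in the evaluation protocol above when no ground-truth reward exists: the human-label success rate (a trajectory counts as a success if the goal is held for at least 50% of its timesteps) and a Pearson-correlation-based distance between reward-model outputs and binary human labels. The latter is our own simplified stand-in for the EPIC distance: the paper's exact computation (its Appendix A) additionally canonicalizes the rewards, which we omit here.

```python
import numpy as np

def success_rate(per_step_labels: list[np.ndarray]) -> float:
    """Fraction of trajectories in which the goal state is held >= 50% of timesteps.
    Each entry is a binary array with one human label per timestep."""
    successes = [labels.mean() >= 0.5 for labels in per_step_labels]
    return float(np.mean(successes))

def pearson_reward_distance(rewards: np.ndarray, labels: np.ndarray) -> float:
    """Simplified EPIC-style proxy: sqrt((1 - rho) / 2), where rho is the Pearson
    correlation between model rewards and binary human labels on sampled states."""
    rho = np.corrcoef(rewards, labels)[0, 1]
    return float(np.sqrt((1.0 - rho) / 2.0))
```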
We conjecture that zero-shot VLM-based rewards work better in environments that are more "photorealistic" because they are closer to the training distribution of the underlying VLM. Figure 2c shows that if, as described earlier, we apply custom textures to the MountainCar environment, the CLIP rewards become well-shaped when used in concert with the goal-baseline regularization technique. For larger regularization strength $\alpha$, the reward shape resembles the slope of the hill from the environment itself – an encouraging result. We then train agents using the CLIP rewards and goal-baseline regularization in all three environments, and achieve a 100% task success rate in two of them (CartPole and the textured MountainCar) for most regularization strengths $\alpha$. Without the custom textures, we are not able to successfully train an agent on the MountainCar task, which supports our hypothesis that the environment visualization is too abstract. The results show that both unregularized and regularized CLIP rewards are effective in the toy RL task domain, with the important caveat that CLIP rewards are only meaningful and well-shaped for environments that are photorealistic enough for the CLIP visual encoder to interpret correctly.

4.3 Can VLM-RMs Learn Complex, Novel Tasks in a Humanoid Robot?

Our primary goal in using VLM-RMs is to learn tasks for which it is difficult to specify a reward function manually. To study such tasks, we consider the Humanoid-v4 environment implemented in the MuJoCo simulator (Todorov et al., 2012). The standard task in this environment is for the humanoid robot to stand up. For this task, the environment provides a reward function based on the vertical position of the robot's center of mass. We consider a range of additional tasks for which no ground truth reward function is available, including kneeling, sitting in a lotus position, and doing the splits. For a full list of tasks we tested, see Table 1. Appendix C presents more detailed task descriptions and the full prompts we used.

Table 1: We successfully learned 5 out of 8 tasks we tried for the humanoid robot (cf. Figure 1). For each task, we evaluate the checkpoint with the highest CLIP reward over 4 random seeds. We show a human evaluator 100 trajectories from the agent and ask them to label how many timesteps were spent successfully performing the goal task. Then, we label an episode as a success if the agent is in the goal state at least 50% of the timesteps. The success rate is the fraction of trajectories labelled as successful. We provide more details on the evaluation as well as more fine-grained human labels in Appendix B and videos of the agents' performance at https://sites.google.com/view/vlm-rm.

| Task | Success Rate |
|--------------------|--------------|
| Kneeling | 100% |
| Lotus position | 100% |
| Standing up | 100% |
| Arms raised | 100% |
| Doing splits | 100% |
| Hands on hips | 64% |
| Standing on one leg | 0% |
| Arms crossed | 0% |

Figure 3: We test the effect of our modifications to the standard Humanoid-v4 environment on the kneeling task. We compare the original environment (a) to modifying the textures (b) and the camera angle (c). We find that modifying the textures to be more realistic is crucial to making the CLIP reward model work. Moving the camera to give a better view of the humanoid helps too, but is less critical in this task.

We make two modifications to the default Humanoid-v4 environment to make it better suited for our experiments.
(1) We change the colors of the humanoid texture and the environment background to be more realistic (based on our results in Section 4.2 that suggest this should improve the CLIP encoder). (2) We move the camera to a fixed position pointing at the agent, slightly angled down, because the original camera position that moves with the agent can make some of our tasks impossible to evaluate. We ablate these changes in Figure 3, finding the texture change is critical and repositioning the camera provides a modest improvement.

Table 1 shows the human-evaluated success rate for all tasks we tested. We solve 5 out of 8 tasks we tried with minimal prompt engineering and tuning. For the remaining 3 tasks, we did not get major performance improvements with additional prompt engineering and hyperparameter tuning, and we hypothesize these failures are related to capability limitations in the CLIP model we use. We invite the reader to evaluate the performance of the trained agents themselves by viewing videos at https://sites.google.com/view/vlm-rm. The three tasks that the agent does not obtain perfect performance for are "hands on hips", "standing on one leg", and "arms crossed". We hypothesize that "standing on one leg" is very hard to learn or might even be impossible in the MuJoCo physics simulation because the humanoid's feet are round. The goal state for "hands on hips" and "arms crossed" is visually similar to a humanoid standing, and we conjecture the current generation of CLIP models is unable to discriminate between such subtle differences in body pose. While the experiments in Table 1 use no goal-baseline regularization (i.e., $\alpha = 0$), we separately evaluate goal-baseline regularization for the kneeling task. Figure 4a shows that $\alpha \neq 0$ improves the reward model's EPIC distance to human labels, suggesting that it would also improve performance on the final task, although we might need a more fine-grained evaluation criterion to confirm this.

Figure 4: VLMs become better reward models with VLM model scale. We evaluate the humanoid kneeling task for different VLM model sizes. We evaluate the EPIC distance between the CLIP rewards and human labels (a and b) and the human-evaluated success rate of an agent trained using differently sized CLIP reward models (c). We see a strong positive effect of model scale on VLM-RM quality. In particular, (c) shows we are only able to learn the kneeling task using the largest publicly available CLIP model, whereas (b) shows there is a smooth improvement in EPIC distance compared to human labels. (a) shows that goal-baseline regularization improves the reward model across model sizes but it is more impactful for small models.

4.4 How do VLM-RMs Scale with VLM Model Size?

Finally, we investigate the effect of the scale of the pre-trained VLM on its quality as a reward model. We focus on the "kneeling" task and consider 4 different large CLIP models: the original CLIP RN50 (Radford et al., 2021), and the ViT-L-14, ViT-H-14, and ViT-bigG-14 from OpenCLIP (Cherti et al., 2023) trained on the LAION-5B dataset (Schuhmann et al., 2022). In Figure 4a we evaluate the EPIC distance to human labels of CLIP reward models for the four model scales and different values of $\alpha$, and we evaluate the success rate of agents trained using the four models. The results clearly show that VLM model scale is a key factor in obtaining good reward models. We detect a clear positive trend between model scale and the EPIC distance of the reward model from human labels.
On the models we evaluate, we find the EPIC distance to human labels is close to log-linear in the size of the CLIP model (Figure 4b). This improvement in EPIC distance translates into an improvement in success rate. In particular, we observe a sharp phase transition between the ViT-H-14 and ViT-bigG-14 CLIP models: we can only learn the kneeling task successfully when using the ViT-bigG-14 model and obtain 0% success rate for all smaller models (Figure 4c). Notably, the reward model improves smoothly and predictably with model scale as measured by EPIC distance. However, predicting the exact point where the RL agent can successfully learn the task is difficult. This is a common pattern in evaluating large foundation models, as observed by Ganguli et al. (2022). 5 Related Work Foundation models (Bommasani et al., 2021) trained on large scale data can learn remarkably general and transferable representations of images, language, and other kinds of data, which makes them useful for a large variety of downstream tasks. For example, pre-trained vision-language encoders, such as CLIP (Radford et al., 2021), have been used far beyond their original scope, e.g., for image generation (Ramesh et al., 2022; Patashnik et al., 2021; Nichol et al., 2021), robot control (Shridhar et al., 2022; Khandelwal et al., 2022), or story evaluation (Matiana et al., 2021). Reinforcement learning from human feedback (RLHF; Christiano et al., 2017) is a critical step in making foundation models more useful (Ouyang et al., 2022). However, collecting human feedback is expensive. Therefore, using pre-trained foundation models themselves to obtain reward signals for RL finetuning has recently emerged as a key paradigm in work on large language models (Bai et al., 2022). Some approaches only require a small amount of natural language feedback instead of a whole dataset of human preferences (Scheurer et al., 2022; 2023; Chen et al., 2023). However, similar techniques have yet to be adopted by the broader RL community. While some work uses language models to compute a reward function from a structured environment representation (Xie et al., 2023; Ma et al., 2023), many RL tasks are visual and require using VLMs instead. Sumers et al. (2023) use generative VLMs to relabel the goal of agent trajectories for hindsight experience replay, but not for specifying rewards. Cui et al. (2022) use CLIP to provide rewards for robotic manipulation tasks given a goal image. However, they only show limited success when using natural language descriptions to define goals, which is the focus of our work. Mahmoudieh et al. (2022) are the first to successfully use CLIP encoders as a reward model conditioned on language task descriptions in robotic manipulation tasks. However, to achieve this, the authors need to explicitly fine-tune the CLIP image encoder on a carefully crafted dataset for a robotics task. Instead, we focus on leveraging CLIP’s zero-shot ability to specify reward functions, which is significantly more sample-efficient and practical. Fan et al. (2022) train a CLIP model to provide a reward signal in Minecraft environments. But, that approach requires a lot of labeled, environment-specific data. Du et al. (2023) finetune a Flamingo VLM (Alayrac et al., 2022) to act as a “success detector” for vision-based RL tasks. However, they do not train RL policies using these success detectors, leaving open the question of how robust they are under optimization pressure. Concurrently to our work, Sontakke et al. 
(2023) successfully use a VLM to provide reward signals for RL agents in robotics settings. However, they focus on specifying the reward with video demonstrations and only show basic results with natural language task descriptions. In contrast to these works, we do not require any finetuning to use CLIP as a reward model, and we successfully train RL policies to achieve a range of complex tasks that do not have an easily-specified ground truth reward function. 6 CONCLUSION We introduced a method to use vision-language models (VLMs) as reward models for reinforcement learning (RL), and implemented it using CLIP as a reward model and standard RL algorithms. We used VLM-RMs to solve classic RL benchmarks and to learn to perform complicated tasks using a simulated humanoid robot. We observed a strong scaling trend with model size, which suggests that future VLMs are likely to be useful as reward models in an even broader range of tasks. Limitations. Fundamentally, our approach relies on the reward model generalizing from a text description to a reward function that captures what a human intends the agent to do. Although the concrete failure cases we observed are likely specific to the CLIP models we used and may be solved by more capable models, some problems will persist. The resulting reward model will be misspecified if the text description does not contain enough information about what the human intends or the VLM generalizes poorly. While we expect future VLMs to generalize better, the risk of the reward model being misspecified grows for more complex tasks, that are difficult to specify in a single language prompt. Therefore, when using VLM-RMs in practice it will be crucial to use independent monitoring to ensure agents trained from automated feedback act as intended. For complex tasks, it will be prudent to use a multi-step reward specification, e.g., by using a VLM capable of having a dialogue with the user about specifying the task. Future Work. There are many possible extensions of our approach that may improve performance but were not necessary in our tasks. For example, finetuning VLMs for specific environments is a natural next step to make them more useful as reward models. To move beyond goal-based supervision, future VLM-RMs could encode videos instead of images. To move towards specifying more complex tasks, future VLM-RMs could use dialogue-enabled VLMs. For practical applications, it will be important to ensure robustness and safety of the reward model. Our work can serve as a basis for studying the safety implications of VLM-RMs. For instance, future work could investigate the robustness of VLM-RMs against optimization pressure by RL agents. More broadly, we believe VLM-RMs open up exciting avenues for future research to build useful agents on top of pre-trained models, such as building language model agents and real world robotic controllers for tasks where we do not have a reward function available. AUTHOR CONTRIBUTIONS Juan Rocamonde designed and implemented the experimental infrastructure, ran most experiments, analyzed results, and wrote large parts of the paper. Victoriano Montesinos implemented parallelized rendering and training to enable using larger CLIP models, implemented and ran many experiments, and performed the human evaluations. Elvis Nava advised on experiment design, implemented and ran some of the experiments, and wrote large parts of the paper. Ethan Perez proposed the original project and advised on research direction and experiment design. 
David Lindner implemented and ran early experiments with the humanoid robot, wrote large parts of the paper, and led the project.

ACKNOWLEDGMENTS

We thank Adam Gleave for valuable discussions throughout the project and detailed feedback on early drafts, Jérémy Scheurer and Nora Belrose for helpful feedback early on, Adrià Garriga-Alonso for help with running experiments, and Xander Balwit for help with editing the paper. We are grateful for funding received from Open Philanthropy, Manifund, the ETH AI Center, the Swiss National Science Foundation (B.F.G. CRSII5-173721 and 315230 189251), ETH project funding (B.F.G. ETH-20 19-01), and the Human Frontiers Science Program (RGY0072/2019).

REFERENCES

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: A visual language model for few-shot learning. In *Advances in Neural Information Processing Systems*, 2022.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. *arXiv preprint arXiv:2212.08073*, 2022.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. *arXiv preprint arXiv:1606.01540*, 2016.

Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback, 2023.

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2818–2829, 2023.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *Advances in Neural Information Processing Systems*, 2017.

Yuchen Cui, Scott Niekum, Abhinav Gupta, Vikash Kumar, and Aravind Rajeswaran. Can foundation models perform zero-shot task specification for robot manipulation? In *Learning for Dynamics and Control Conference*, 2022.

Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando de Freitas, and Serkan Cabi. Vision-language models as success detectors. *arXiv preprint arXiv:2303.07280*, 2023.
XEFWBxi075
This paper lacks further analysis of the instance-wise weighting. Because the final results are weighted by a softmax, the predictions of the individual trees are no longer separate: if we cut off one tree, the contributions of the other trees also change. This differs from XGB and NODE, but the authors did not point this out.
GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data Sascha Marton University of Mannheim, Germany sascha.marton@uni-mannheim.de Stefan Lüdtke University of Rostock, Germany stefan.luedtke@uni-rostock.de Christian Bartelt University of Mannheim, Germany christian.bartelt@uni-mannheim.de Heiner Stuckenschmidt University of Mannheim, Germany heiner.stuckenschmidt@uni-mannheim.de ABSTRACT Despite the success of deep learning for text and image data, tree-based ensemble models are still state-of-the-art for machine learning with heterogeneous tabular data. However, there is a significant need for tabular-specific gradient-based methods due to their high flexibility. In this paper, we propose GRANDE, GRAdieNt-Based Decision Tree Ensembles, a novel approach for learning hard, axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE is based on a dense representation of tree ensembles, which affords to use backpropagation with a straight-through operator to jointly optimize all model parameters. Our method combines axis-aligned splits, which is a useful inductive bias for tabular data, with the flexibility of gradient-based optimization. Furthermore, we introduce an advanced instance-wise weighting that facilitates learning representations for both, simple and complex relations, within a single model. We conducted an extensive evaluation on a predefined benchmark with 19 classification datasets and demonstrate that our method outperforms existing gradient-boosting and deep learning frameworks on most datasets. The method is available under: https://github.com/s-marton/GRANDE 1 INTRODUCTION Heterogeneous tabular data is the most frequently used form of data (Chui et al., 2018; Shwartz-Ziv & Armon, 2022) and is indispensable in a wide range of applications such as medical diagnosis (Ulmer et al., 2020; Somani et al., 2021), estimation of creditworthiness (Clements et al., 2020) and fraud detection (Cartella et al., 2021). Therefore, enhancing the predictive performance and robustness of models can bring significant advantages to users and companies (Borisov et al., 2022). However, tabular data comes with considerable challenges like noise, missing values, class imbalance, and a combination of different feature types, especially categorical and numerical data. Despite the success of deep learning (DL) in various domains, recent studies indicate that tabular data still poses a major challenge and tree-based models like XGBoost and CatBoost outperform them in most cases (Borisov et al., 2022; Grinsztajn et al., 2022; Shwartz-Ziv & Armon, 2022). At the same time, employing end-to-end gradient-based training provides several advantages over traditional machine learning methods (Borisov et al., 2022). They offer a high level of flexibility by allowing an easy integration of arbitrary, differentiable loss functions tailored towards specific problems and support iterative training (Sahoo et al., 2017). Moreover, gradient-based methods can be incorporated easily into multimodal learning, with tabular data being one of several input types (Lichtenwalter et al., 2021; Pölsterl et al., 2021). Therefore, creating tabular-specific, gradient-based methods is a very active field of research and the need for well-performing methods is intense (Grinsztajn et al., 2022). Recently, Marton et al. (2023) introduced GradTree, a novel approach that uses gradient descent to learn hard, axis-aligned decision trees (DTs). 
This is achieved by reformulating DTs to a dense representation and jointly optimizing all tree parameters using backpropagation with a straight-through (ST) operator. Learning hard, axis-aligned DTs with gradient descent allows combining the advan- tageous inductive bias of tree-based methods with the flexibility of a gradient-based optimization. In this paper, we propose GRANDE, **GRAdieNt-Based Decision Tree Ensembles**, a novel approach for learning decision tree ensembles using end-to-end gradient descent. Similar to Marton et al. (2023), we use a dense representation for split nodes and the ST operator to deal with the non-differentiable nature of DTs. We build upon their approach, transitioning from individual trees to a weighted tree ensemble, while maintaining an efficient computation. As a result, GRANDE holds a significant advantage over existing gradient-based methods. Typically, DL methods are biased towards smooth solutions (Rahaman et al., 2019). As the target function in tabular datasets is usually not smooth, DL methods struggle to find these irregular functions. In contrast, models that are based on hard, axis aligned DTs learn piece-wise constant functions and therefore do not show such a bias (Grinsztajn et al., 2022). This important advantage is one inherent aspect of GRANDE, as it utilizes hard, axis-aligned DTs. This is a major difference to existing DL methods for hierarchical representations like NODE, where soft and oblique splits are used (Popov et al., 2019). Furthermore, we introduce instance-wise weighting in GRANDE. This allows learning appropriate representations for simple and complex rules within a single model, which increases the performance of the ensemble. Furthermore, we show that our instance-wise weighting has a positive impact on the local interpretability relative to other state-of-the-art methods. More specifically, our contributions are as follows: - We extend GradTree (Marton et al., 2023) from individual trees to an end-to-end gradient-based tree ensemble, maintaining efficient computation (Section 3.1). - We introduce softsign as a differentiable split function and show the advantage over commonly used alternatives (Section 3.2). - We propose a novel weighting technique that emphasizes instance-wise estimator importance (Section 3.3). We conduct an extensive evaluation on 19 binary classification tasks (Section 4) based on the predefined tabular benchmark proposed by Bischl et al. (2021). GRANDE outperforms existing methods for both, default and optimized hyperparameters. The performance difference to other methods is substantial on several datasets, making GRANDE an important extension to the existing repertoire of tabular data methods. ## 2 BACKGROUND: GRADIENT-BASED DECISION TREES GRANDE builds on gradient-based decision trees (GradTree) at the level of individual trees in the ensemble. Hence, we summarize the relevant aspects and notation of GradTree in this section and refer to Marton et al. (2023) for a complete overview. Traditionally, DTs involve nested concatenation of rules. In GradTree, DTs are formulated as arithmetic functions based on addition and multiplication to facilitate gradient-based learning. Thereby both, GradTree and GRANDE focus on learning fully-grown (i.e., complete, full) DTs which can be pruned post-hoc. 
A DT of depth $d$ is formulated with respect to its parameters as: $$t(x|\lambda, \tau, \iota) = \sum_{l=0}^{2^d - 1} \lambda_l L(x|l, \tau, \iota)$$ (1) where $L$ is a function that indicates whether a sample $x \in \mathbb{R}^n$ belongs to a leaf $l$, $\lambda \in \mathcal{C}^{2^d}$ denotes class membership for each leaf node, $\tau \in \mathbb{R}^{2^d - 1}$ represents split thresholds and $\iota \in \mathbb{N}^{2^d - 1}$ the feature index for each internal node. To support a gradient-based optimization and ensure an efficient computation via matrix operations, a novel dense DT representation is introduced in GradTree. Traditionally, the feature index vector $\iota$ is one-dimensional, but GradTree expands it into a matrix form. Specifically, this representation one-hot encodes the feature index, converting $\iota \in \mathbb{R}^{2^d - 1}$ into a matrix $I \in \mathbb{R}^{(2^d - 1) \times n}$. Similarly, for split thresholds, instead of a single value for all features, individual values for each feature are stored, leading to a matrix representation $T \in \mathbb{R}^{(2^d - 1) \times n}$. By enumerating the internal nodes in breadth-first order, we can redefine the indicator function $L$ for a leaf $l$, resulting in $$g(x|\lambda, T, I) = \sum_{l=0}^{2^d - 1} \lambda_l L(x|l, T, I)$$ (2) where \( L(x|l, T, I) = \prod_{j=1}^{d} (1 - p(l, j)) S(x|I_{(l,j)}, T_{(l,j)}) + p(l, j) \left( 1 - S(x|I_{(l,j)}, T_{(l,j)}) \right) \) Here, \( i \) is the index of the internal node preceding a leaf node \( l \) at a certain depth \( j \) and \( p \) indicates whether the left (\( p = 0 \)) or the right branch (\( p = 1 \)) was taken. Typically, DTs use the Heaviside step function for splitting, which is non-differentiable. GradTree reformulates the split function to account for reasonable gradients: \[ S(x|\ell, \tau) = \lfloor S(\ell \cdot x - \ell \cdot \tau) \rfloor \] Where \( S(z) = \frac{1}{1 + e^{-z}} \) represents the logistic function, \( \lfloor z \rfloor \) stands for rounding a real number \( z \) to the nearest integer and \( a \cdot b \) denotes the dot product between two vectors \( a \) and \( b \). We further need to ensure that \( \ell \) is a one-hot encoded vector to account for axis-aligned splits. This is achieved by applying a hardmax transformation before calculating \( S \). Both rounding and hardmax operations are non-differentiable. To overcome this, GradTree employs the straight-through (ST) operator during backpropagation. This allows the model to use non-differentiable operations in the forward pass while ensuring gradient propagation in the backward pass. 3 GRANDE: GRADIENT-BASED DECISION TREE ENSEMBLES One core contribution of this paper is the extension of GradTree to tree ensembles (Section 3.1). In Section 3.2, we propose softsign as a differentiable split function to propagate more reasonable gradients. Furthermore, we introduce an instance-wise weighting in Section 3.3 and regularization techniques in Section 3.4. As a result, GRANDE can be learned end-to-end with gradient descent, leveraging the potential and flexibility of a gradient-based optimization. 3.1 FROM DECISION TREES TO WEIGHTED TREE ENSEMBLES One advantage of GRANDE over existing gradient-based methods is the inductive bias of axis-aligned splits for tabular data. Combining this property with an end-to-end gradient-based optimization is at the core of GRANDE. 
This is also a major difference to existing DL methods for hierarchical representations like NODE, where soft, oblique splits are used (Popov et al., 2019). Therefore, we can define GRANDE as \[ G(x|\omega, L, T, I) = \sum_{e=0}^{E} \omega_e g(x|L_e, T_e, I_e) \] where \( E \) is the number of estimators in the ensemble and \( \omega \) is a weight vector. By extending \( L \) to a matrix and \( T, I \) to tensors for the complete ensemble instead of defining them individually for each tree, we can leverage parallel computation for an efficient training. As GRANDE can be learned end-to-end with gradient descent, we keep an important advantage over existing, non-gradient-based tree methods like XGBoost and CatBoost. Both, the sequential induction of the individual trees and the sequential combination of individual trees via boosting are greedy. This results in constraints on the search space and can favor overfitting, as highlighted by Marton et al. (2023). In contrast, GRANDE learns all parameters of the ensemble jointly and overcomes these limitations. 3.2 DIFFERENTIABLE SPLIT FUNCTIONS The Heaviside step function, which is commonly used as split function in DTs, is non-differentiable. To address this challenge, various studies have proposed the employment of differentiable split functions. A predominant approach is the adoption of the sigmoid function, which facilitates soft decisions (Jordan & Jacobs, 1994; Irsoy et al., 2017; Frosst & Hinton, 2017). A more recent development in this field originated with the introduction of the entmax transformation (Peters et al., 2019). Researchers utilized a two-class entmax (entmoid) function to turn the decisions more sparse (Popov et al., 2019). Further, Chang et al. (2021) proposed a temperature annealing procedure to gradually turn the decisions hard. Marton et al. (2023) introduced an alternative method for generating hard splits by using a straight-through (ST) operator after a sigmoid split function to generate hard splits. Figure 1: **Differentiable Split Functions.** The sigmoid gradient declines smoothly, while entmoid’s gradient decays more rapidly but becomes zero for large values. The scaled softsign has high gradients for small values but maintains a responsive gradient for large values, offering greater sensitivity. While this allows using hard splits for calculating the function values, it also introduces a mismatch between the forward and backward pass. However, we can utilize this to incorporate additional information: By using a sigmoid function, the distance between a feature value and the threshold is used as additional information during gradient computation. Accordingly, the gradient behavior plays a pivotal role in ensuring effective differentiation, especially in scenarios where input values are close to the decision threshold. The traditional sigmoid function can be suboptimal due to its smooth gradient decline. Entmoid, although addressing certain limitations of sigmoid, still displays an undesirable gradient behavior. Specifically, its gradient drops to zero when the difference in values is too pronounced. This can hinder the model’s ability to accommodate samples that exhibit substantial variances from the threshold. 
Therefore, we propose using a softsign function, scaled to \((0, 1)\), as a differentiable split function: \[ S_{ss}(z) = \frac{1}{2} \left( \frac{z}{1 + |z|} + 1 \right) \] The distinct gradient characteristics of the softsign, which are pronounced if samples are close to the threshold, reduce sharply but maintain responsive gradients if there is a large difference between the feature value and the threshold. These characteristics make it superior for differentiable splitting. This concept is visualized in Figure 1. Besides the intuitive advantage of using a softsign split function, we also show empirically that this is the superior choice (Table 4).

### 3.3 Instance-Wise Estimator Weights

One challenge of ensemble methods is learning a good weighting scheme of the individual estimators. The flexibility of an end-to-end gradient-based optimization allows including learnable weight parameters in the optimization. A simple solution would be learning one weight for each estimator and using a softmax over all weights, resulting in a weighted average. However, this forces a very homogeneous ensemble, in which each tree aims to make equally good predictions for all samples. In contrast, it would be beneficial if individual trees can account for different areas of the target function, and are not required to make confident predictions for each sample. To address this, we propose an advanced weighting scheme that allows calculating instance-wise weights that can be learned within the gradient-based optimization. Instead of using one weight per estimator, we use one weight for each leaf of the estimator as visualized in Figure 2, and thus define the weights as \(W \in \mathbb{R}^{E \times 2^d}\) instead of \(\omega \in \mathbb{R}^E\). We define \(p(x; L, T, I) : \mathbb{R}^n \rightarrow \mathbb{R}^E\) as a function to calculate a vector comprising the individual prediction of each tree. Further, we define a function \(w(x; W, L, T, I) : \mathbb{R}^n \rightarrow \mathbb{R}^E\) to calculate a weight vector with one weight for each tree, based on the leaf to which the current sample is assigned. Subsequently, a softmax is applied on these chosen weights for each sample. The process of multiplying the post-softmax weights by the predicted values from each tree equates to computing a weighted average. This results in

\[ G(x; W, L, T, I) = \sigma(w(x; W, L, T, I)) \cdot p(x; L, T, I) \]

where

\[ w(x; W, L, T, I) = \begin{bmatrix} \sum_{l=0}^{2^d - 1} W_{0,l} \, L(x|l, T_0, I_0) \\ \sum_{l=0}^{2^d - 1} W_{1,l} \, L(x|l, T_1, I_1) \\ \vdots \\ \sum_{l=0}^{2^d - 1} W_{E,l} \, L(x|l, T_E, I_E) \end{bmatrix}, \quad p(x; L, T, I) = \begin{bmatrix} g(x|L_0, T_0, I_0) \\ g(x|L_1, T_1, I_1) \\ \vdots \\ g(x|L_E, T_E, I_E) \end{bmatrix} \]

and \(\sigma(z)\) is the softmax function. It is important to note that when calculating \(L\) (see Equation 3), only the value for the leaf to which the sample is assigned in a given tree is non-zero.

Figure 2: **GRANDE Architecture**. This figure visualizes the structure and weighting of GRANDE for an exemplary ensemble with two trees of depth two. For each tree in the ensemble, and for every sample, we determine the weight of the leaf which the sample is assigned to.

We want to note that our weighting scheme permits calculating instance-wise weights even for unseen samples.
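To make the instance-wise weighting concrete, the sketch below computes the ensemble prediction for a single sample from its per-tree leaf assignments, the leaf-weight matrix \(W\), and the per-tree predictions. It is a simplified NumPy illustration of the equations above (tree routing and leaf values are assumed to be given), not the authors' implementation.

```python
import numpy as np

def grande_prediction(leaf_index, leaf_weights, tree_predictions):
    """Instance-wise weighted ensemble prediction for a single sample.
    leaf_index: (E,) index of the leaf each tree routes the sample to;
    leaf_weights: (E, 2**d) learnable weight per leaf and estimator (W);
    tree_predictions: (E,) prediction g_e(x) of each tree for this sample."""
    # Pick, for every tree, the weight of the leaf the sample falls into ...
    selected = leaf_weights[np.arange(len(leaf_index)), leaf_index]
    # ... normalize the selected weights with a softmax ...
    w = np.exp(selected - selected.max())
    w /= w.sum()
    # ... and form the weighted average of the per-tree predictions.
    return float(w @ tree_predictions)

# Toy example: 3 trees of depth 2 (4 leaves each).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
print(grande_prediction(np.array([0, 3, 1]), W, np.array([0.2, 0.9, 0.4])))
```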
Our weighting scheme, in addition to its instance-wise nature, is substantially different to existing tree ensemble methods and post-hoc weighting schemes (He et al., 2014; Cui et al., 2023), as it is incorporated into the training procedure which is necessary to capture local interactions. Furthermore, the predictions of individual trees are not separate and changes in the instance-wise weights of one estimator directly impacts the weight of the remaining estimators. In our evaluation, we demonstrate that instance-wise weights significantly enhance the performance of GRANDE and emphasize local interpretability by learning representations for simple and complex rules within one model. ### 3.4 Regularization: Feature Subset, Data Subset and Dropout The combination of tree-based methods with a gradient-based optimization opens the door for the application of numerous regularization techniques. For each tree in the ensemble, we select a feature subset. Therefore, we can regularize our model and simultaneously, we solve the poor scalability of GradTree with an increasing number of features. Similarly, we select a subset of the samples for each estimator. Furthermore, we implemented dropout by randomly deactivating a predefined fraction of the estimators in the ensemble and rescaling the weights accordingly. ## 4 Experimental Evaluation As pointed out by Grinsztajn et al. (2022), most papers presenting a new method for tabular data have a highly varying evaluation methodology, with a small number of datasets that might be biased towards the authors’ model. As a result, recent surveys showed that tree boosting methods like XGBoost and CatBoost are still state-of-the-art and outperform new architectures for tabular data on most datasets (Grinsztajn et al., 2022; Shwartz-Ziv & Armon, 2022; Borisov et al., 2022). This highlights the necessity for an extensive and unbiased evaluation, as we will carry out in the following, to accurately assess the performance of a new method and draw valid conclusions. We want to emphasize that recent surveys and evaluation on predefined benchmarks indicate that there is no “one-size-fits-all” solution for all tabular datasets. Consequently, we should view new methods as an extension to the existing repertoire and set our expectations in line with this perspective. ### 4.1 Experimental Setup **Datasets and Preprocessing** For our evaluation, we used a predefined collection of datasets that was selected based on objective criteria from OpenML Benchmark Suites and comprises a total of 19 binary classification datasets (see Table 5 for details). The selection process was adopted from Bischl et al. (2021) and therefore is not biased towards our method. A more detailed discussion on Table 2: **Performance Comparison.** We report the test macro F1-score (mean ± stdev for a 5-fold CV) with optimized parameters. The datasets are sorted based on the data size. 
| Dataset | GRANDE | XGB | CatBoost | NODE | |--------------------------|-----------------|----------------|-----------------|----------------| | dresses-sales | 0.612 ± 0.049 (1) | 0.581 ± 0.059 (3) | 0.588 ± 0.036 (2) | 0.564 ± 0.051 (4) | | climate-simulation-crashes | 0.853 ± 0.070 (1) | 0.763 ± 0.064 (4) | 0.778 ± 0.050 (3) | 0.802 ± 0.035 (2) | | cylinder-bands | 0.819 ± 0.032 (1) | 0.773 ± 0.042 (3) | 0.801 ± 0.043 (2) | 0.754 ± 0.040 (4) | | wdbc | 0.975 ± 0.010 (1) | 0.953 ± 0.030 (4) | 0.963 ± 0.023 (3) | 0.966 ± 0.016 (2) | | ilpd | 0.657 ± 0.042 (1) | 0.632 ± 0.043 (3) | 0.643 ± 0.053 (2) | 0.526 ± 0.069 (4) | | tokyo1 | 0.921 ± 0.004 (3) | 0.915 ± 0.011 (4) | 0.927 ± 0.013 (1) | 0.921 ± 0.010 (2) | | qsar-biodeg | 0.854 ± 0.022 (1) | 0.853 ± 0.020 (2) | 0.844 ± 0.023 (3) | 0.836 ± 0.028 (4) | | ozone-level-8hr | 0.726 ± 0.020 (1) | 0.688 ± 0.021 (4) | 0.721 ± 0.027 (2) | 0.703 ± 0.029 (3) | | madelon | 0.803 ± 0.010 (3) | 0.833 ± 0.018 (2) | 0.861 ± 0.012 (1) | 0.571 ± 0.022 (4) | | Bioresponse | 0.794 ± 0.008 (3) | 0.799 ± 0.011 (2) | 0.801 ± 0.014 (1) | 0.780 ± 0.011 (4) | | wilt | 0.936 ± 0.015 (2) | 0.911 ± 0.010 (4) | 0.919 ± 0.007 (3) | 0.937 ± 0.017 (1) | | churn | 0.914 ± 0.017 (2) | 0.900 ± 0.017 (3) | 0.869 ± 0.021 (4) | 0.930 ± 0.011 (1) | | phoneme | 0.846 ± 0.008 (4) | 0.872 ± 0.007 (2) | 0.876 ± 0.005 (1) | 0.862 ± 0.013 (3) | | SpeedDating | 0.723 ± 0.013 (1) | 0.704 ± 0.015 (4) | 0.718 ± 0.014 (2) | 0.707 ± 0.015 (3) | | PhishingWebsites | 0.969 ± 0.006 (1) | 0.968 ± 0.006 (2) | 0.965 ± 0.003 (4) | 0.968 ± 0.006 (3) | | Amazon_employee_access | 0.665 ± 0.009 (2) | 0.621 ± 0.008 (4) | 0.671 ± 0.011 (1) | 0.649 ± 0.009 (3) | | nomao | 0.958 ± 0.002 (3) | 0.965 ± 0.003 (1) | 0.964 ± 0.002 (2) | 0.956 ± 0.001 (4) | | adult | 0.790 ± 0.006 (4) | 0.798 ± 0.004 (1) | 0.796 ± 0.004 (2) | 0.794 ± 0.004 (3) | | numerai28.6 | 0.519 ± 0.003 (1) | 0.518 ± 0.001 (3) | 0.519 ± 0.002 (2) | 0.503 ± 0.010 (4) | Normalized Mean ↑: 0.776 (1), 0.483 (3), 0.671 (2), 0.327 (4) Mean Reciprocal Rank (MRR) ↑: 0.702 (1), 0.417 (3), 0.570 (2), 0.395 (4) the selection of the benchmark can be found in Appendix A. We one-hot encoded low-cardinality categorical features and used leave-one-out encoding for high-cardinality categorical features (more than 10 categories). To make them suitable for a gradient-based optimization, we gaussianized features using a quantile transformation, as it is common practice (Grinsztajn et al., 2022). In line with Borisov et al. (2022), we report the mean and standard deviation of the test performance over a 5-fold cross-validation to ensure reliable results. **Methods** We compare our approach to XGBoost and CatBoost, which achieved superior results according to recent studies, and NODE, which is most related to our approach. With this setup, we have one state-of-the-art tree-based and one gradient-based approach for each tree type (see Table 1). In addition, we provide an extended evaluation including SAINT, RandomForest and ExtraTree as additional benchmarks in Appendix B. These additional results are in line with the results presented in the following. **Hyperparameters** We optimized the hyperparameters using Optuna (Akiba et al., 2019) with 250 trials and selected the search space as well as the default parameters for related work in accordance with Borisov et al. (2022). The best parameters were selected based on a 5x2 cross-validation as suggested by Raschka (2018) where the test data of each fold was held out of the HPO to get unbiased results. 
To deal with class imbalance, we further included class weights. Additional information, along with the hyperparameters for each approach, is in Appendix E.

### 4.2 Results

**GRANDE outperforms existing methods on most datasets** We evaluated the performance with optimized hyperparameters based on the macro F1-score (Table 2) to account for class imbalance. Additionally, we report the accuracy and ROC-AUC score in Appendix B, which are consistent with the results presented in the following. GRANDE outperformed existing methods and achieved the highest mean reciprocal rank (MRR) of 0.702 and the highest normalized mean of 0.776. CatBoost yielded the second-best results (MRR of 0.570 and normalized mean of 0.671), followed by XGBoost (MRR of 0.417 and normalized mean of 0.483) and NODE (MRR of 0.395 and normalized mean of 0.327). Yet, our findings are in line with existing work, indicating that there is no universal method for tabular data. However, on several datasets such as *climate-simulation-crashes* and *cylinder-bands*, the performance difference to other methods was substantial, which highlights the importance of GRANDE as an extension to the existing repertoire. Furthermore, as the datasets are sorted by their size, we can observe that the results of GRANDE are especially good for small datasets, which is an interesting research direction for future work.

Table 4: **Ablation Study Summary.** Left: Comparison of different options for differentiable split functions (complete results in Table 10). Right: Comparison of our instance-wise weighting based on leaf weights with a single weight for each estimator (complete results in Table 11).

| Differentiable Split Function | Normalized Mean ↑ | Mean Reciprocal Rank (MRR) ↑ |
|-------------------------------|-------------------|------------------------------|
| Softsign                      | 0.7906 (1)        | 0.8246 (1)                   |
| Entmoid                       | 0.4188 (2)        | 0.5526 (2)                   |
| Sigmoid                       | 0.2207 (3)        | 0.4561 (3)                   |

| Weighting Technique  | Normalized Mean ↑ | Mean Reciprocal Rank (MRR) ↑ |
|----------------------|-------------------|------------------------------|
| Leaf Weights         | 0.8235 (1)        | 0.9211 (1)                   |
| Estimator Weights    | 0.1765 (2)        | 0.5789 (2)                   |

**GRANDE is efficient for large and high-dimensional datasets** GRANDE averaged 47 seconds across all datasets, with a maximum runtime of 107 seconds. Thereby, the runtime of GRANDE is robust to high-dimensional (37 seconds for *Bioresponse* with 1,776 features) and large datasets (39 seconds for *numerai28.6* with 96,320 samples). GRANDE achieved a significantly lower runtime compared to our gradient-based benchmark NODE, which has an approximately three times higher average runtime of 130 seconds. However, it is important to note that GBDT frameworks, especially XGBoost, are highly efficient when executed on the GPU and achieve significantly lower runtimes compared to gradient-based methods. The complete runtimes are listed in the appendix (Table 9).

**GRANDE outperforms existing methods with default hyperparameters** Many methods, especially DL methods, are heavily reliant on proper hyperparameter optimization. Yet, it is a desirable property that a method achieves good results even with its default setting. GRANDE achieves superior results with default hyperparameters and significantly outperforms existing methods on most datasets. More specifically, GRANDE has the highest normalized mean performance (0.6371) and the highest MRR (0.6404), as summarized in Table 3.

**Softsign improves performance** As discussed in Section 3.2, we argue that employing softsign as the split index activation propagates informative gradients beneficial for the optimization.
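As an illustration of the pattern we have in mind, the sketch below contrasts a softsign activation with a sigmoid one over the split-index logits of a single node and applies a straight-through (ST) style discretization. The function and variable names are ours, and the actual GRANDE split parametrization (thresholds, feature logits, and discretization details) follows GradTree and is more involved; this is only a simplified stand-in.

```python
import torch
import torch.nn.functional as F

def st_feature_selection(index_logits, activation="softsign"):
    """Pick one split feature per node: hard one-hot forward pass, soft gradients backward."""
    if activation == "softsign":
        soft = F.softsign(index_logits)      # z / (1 + |z|): gradient decays only polynomially
    else:
        soft = torch.sigmoid(index_logits)   # gradient vanishes exponentially for large |z|
    hard = F.one_hot(soft.argmax(dim=-1), num_classes=soft.shape[-1]).float()
    return hard + soft - soft.detach()       # straight-through estimator

# Toy usage: 4 internal nodes choosing among 10 candidate features.
logits = torch.randn(4, 10, requires_grad=True)
selection = st_feature_selection(logits)
selection.sum().backward()                   # gradients still reach all candidate features
```

The point of the softsign path is that logits far from zero still receive a usable gradient signal, whereas the sigmoid path saturates quickly; the ST trick keeps the forward computation hard and axis-aligned.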
In Table 4, we support these claims by showing a superior performance of GRANDE with a softsign activation (before discretizing with the ST operator) compared to sigmoid as the default choice, as well as an entmoid function, which is commonly used in related work (Popov et al., 2019; Chang et al., 2021).

**Instance-wise weighting increases model performance** GRANDE uses instance-wise weighting to assign varying weights to estimators for each sample based on selected leaves. This promotes ensemble diversity and encourages estimators to capture unique local interactions. We argue that the ability to learn and represent simple, local rules with individual estimators in our ensemble can have a positive impact on the overall performance, as it simplifies the task that has to be solved by the remaining estimators. As a result, GRANDE can efficiently learn compact representations for simple rules, where complex models usually tend to learn overly complex representations.

### 4.3 Case Study: Instance-Wise Weighting for the PhishingWebsites Dataset

In the following case study, we demonstrate the ability of GRANDE to learn compact representations for simple rules within a complex ensemble: The *PhishingWebsites* dataset is concerned with identifying malicious websites based on metadata and additional observable characteristics. Although the task is challenging (i.e., it is not possible to solve it sufficiently well with a simple model, as shown in Table 12), there exist several clear indicators for phishing websites. Thus, some instances can be categorized using simple rules, while assigning other instances is more difficult. Ideally, if an instance can be easily categorized, the model should follow simple rules to make a prediction.

Figure 4: Anchors Explanations. This figure shows the local explanations generated by Anchors for the given instance. The explanation for GRANDE only comprises a single rule. In contrast, the corresponding explanations for the other methods have significantly higher complexity, which indicates that these methods are not able to learn simple representations within a complex model.

One example of a rule that holds universally in the given dataset is that an instance can be classified as phishing if a prefix or suffix was added to the domain name. By assessing the weights for an exemplary instance fulfilling this rule, we can observe that the DT visualized in Figure 3 accounts for 94% of the prediction. Accordingly, GRANDE has learned a very simple representation and the classification is derived by applying an easily comprehensible rule. Notably, for the other methods, it is not possible to assess the importance of individual estimators out-of-the-box in a similar way, as the prediction is derived either by sequentially summing up the predictions (e.g., XGBoost and CatBoost) or by equally weighting all estimators. Furthermore, the instance-wise weighting has a significant positive impact on the average performance of GRANDE compared to using one single weight for each estimator (see Table 4).

**Instance-wise weighting can be beneficial for local interpretability** In addition to the performance increase, our instance-wise weighting has a notable impact on the local interpretability of GRANDE. For each instance, we can assess the weights of individual estimators and inspect the estimators with the highest importance to understand which rules have the greatest impact on the prediction.
For the given example, we only need to observe a single tree of depth two (Figure 3) to understand why the given instance was classified as phishing, even though the complete model is very complex. In contrast, existing ensemble methods require a global interpretation of the model and do not provide simple, local explanations out-of-the-box. However, similar explanations can be extracted using Anchors (Ribeiro et al., 2018). Anchors, as an extension to LIME (Ribeiro et al., 2016), provides model-agnostic explanations by identifying conditions (called “anchors”) which, when satisfied, guarantee a certain prediction with a high probability (noted as precision). These anchors are interpretable, rules-based conditions derived from input features that consistently lead to the same model prediction. Figure 4 shows the extracted rules for each approach. We can clearly see that the anchor extracted for GRANDE matches the rule we have identified based on the instance-wise weights in Figure 3. Furthermore, it is evident that the prediction derived by GRANDE is much simpler compared to any other approach, as it only comprises a single rule. Notably, this comes without suffering a loss in the precision, which is 1.00 for all methods. Furthermore, the rule learned by GRANDE has a significantly higher coverage, which means that the rule applied by GRANDE is more broadly representative. The corresponding experiment with additional details can be found in the supplementary material, and a more detailed discussion of the weighting statistics is included in Appendix D. 5 RELATED WORK Tabular data is the most frequently used type of data, and learning methods for tabular data are a field of very active research. Existing work can be divided into tree-based, DL and hybrid methods. In the following, we categorize the most prominent methods based on these three categories and differentiate our approach from existing work. For a more comprehensive review, we refer to Borisov et al. (2022), Shwartz-Ziv & Armon (2022) and Grinsztajn et al. (2022). **Tree-Based Methods** Tree-based methods have been widely used for tabular data due to their interpretability and ability to capture non-linear relationships. While individual trees usually offer a higher interpretability, tree ensemble methods Breiman (2001); Geurts et al. (2006), most notably gradient-boosted DTs (GBDT) are commonly used to achieve superior performance (Friedman 2001). The most prominent GBDT methods for tabular data improve the gradient boosting algorithm by for instance introducing advanced regularization (XGBoost Chen & Guestrin 2016), a special handling for categorical variables (CatBoost Prokhorenkova et al. 2018) or a leaf-wise growth strategy (LightGBM Ke et al. 2017). Regarding the structure, GRANDE is similar to existing tree-based models. The main difference is the end-to-end gradient-based training procedure, which offers additional flexibility, and the instance-wise weighting. **Deep Learning Methods** With the success of DL in various domains, researchers have started to adjust DL architectures, mostly transformers, to tabular data Gorshniy et al. (2021) Arik & Pfister (2021) Huang et al. (2020) Cai et al. (2021) Kossen et al. (2021). According to recent studies, Self-Attention and Intersample Attention Transformer (SAINT) is the superior DL method for tabular data using attention over both, rows and columns Somepalli et al. (2021). 
Although GRANDE, similar to DL methods, uses gradient descent for training, it has a shallow, hierarchical structure comprising hard, axis-aligned splits. **Hybrid Methods** Hybrid methods aim to combine the strengths of a gradient-based optimization with other algorithms, most commonly tree-based methods Yang et al. (2018) Abutbul et al. (2020) Hehn et al. (2020) Chen (2020) Ke et al. (2019) (2018) Katzir et al. (2020). One prominent way to achieve this is using soft DTs to apply gradient descent by replacing hard decisions with soft ones, and axis-aligned with oblique splits Frosst & Hinton (2017) Kotschieder et al. (2015) Luo et al. (2021) Hazimeh et al. (2020) Yu et al. (2021). Neural Oblivious Decision Ensembles (NODE) is one prominent hybrid method which learns ensembles of oblivious DTs with gradient descent and is therefore closely related to our work Popov et al. (2019). Oblivious DTs use the same splitting feature and threshold for each internal node at the same depth, which allows an efficient, parallel computation and makes them suitable as weak learners. In contrast, GRANDE uses standard DTs as weak learners. GRANDE can also be categorized as a hybrid method. The main difference to existing methods is the use of hard, axis-aligned splits, which prevents overly smooth solution typically inherent in soft, oblique trees. While some works demonstrate strong results of DL methods Kadra et al. (2021), recent studies indicate that, despite huge effort in finding high-performing DL methods, tree-based models still outperform DL for tabular data Grinsztajn et al. (2022) Borisov et al. (2022) Shwartz-Ziv & Armon (2022), even though the gap is diminishing McElfresh et al. (2023). One main reason for the superior performance of tree-based methods lies in the use of axis-aligned splits that are not biased towards overly smooth solutions Grinsztajn et al. (2022). Therefore, GRANDE aligns with this argument and uses hard, axis-aligned splits combined with the flexibility of a gradient-based optimization. ## 6 Conclusion and Future Work In this paper, we introduced GRANDE, a new method for learning hard, axis-aligned tree ensembles with gradient-descent. GRANDE combines the advantageous inductive bias of axis-aligned splits with the flexibility offered by gradient descent optimization. In an extensive evaluation on a predefined benchmark, we demonstrated that GRANDE achieved superior results. Both with optimized and default parameters, it outperformed existing state-of-the-art methods on most datasets. Furthermore, we showed that the instance-wise weighting of GRANDE emphasizes learning representations for simple and complex relations within a single model, which increases the local interpretability compared to existing methods. Currently, the proposed architecture is a shallow ensemble and already achieves state-of-the-art performance. However, the flexibility of a gradient-based optimization holds potential e.g., by including categorical embeddings, stacking of tree layers and an incorporation of tree layers to DL frameworks, which is subject to future work. ACKNOWLEDGMENTS This research was supported in part by the German Federal Ministry for Economic Affairs and Climate Action of Germany (BMWK), and in part by the German Federal Ministry for Environment, Nature Conservation and Nuclear Safety (BMUV). REFERENCES Ami Abutbul, Gal Elidan, Liran Katzir, and Ran El-Yaniv. Dnf-net: A neural architecture for tabular data. *arXiv preprint arXiv:2006.06465*, 2020. 
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In *Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining*, pp. 2623–2631, 2019. Sercan Ö Arikan and Tomas Pfister. Tabnet: Attentive interpretable tabular learning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 35, pp. 6679–6687, 2021. Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Pieter Gijsbers, Frank Hutter, Michel Lang, Rafael Gomes Mantovani, Jan N van Rijn, and Joaquin Vanschoren. Openml benchmarking suites. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)*, 2021. Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. Deep neural networks and tabular data: A survey. *IEEE Transactions on Neural Networks and Learning Systems*, 2022. Leo Breiman. Random forests. *Machine learning*, 45:5–32, 2001. Shaofeng Cai, Kaiping Zheng, Gang Chen, HV Jagadish, Beng Chin Ooi, and Meihui Zhang. Armnet: Adaptive relation modeling network for structured data. In *Proceedings of the 2021 International Conference on Management of Data*, pp. 207–220, 2021. Francesco Cartella, Orlando Anunciacao, Yuki Funabiki, Daisuke Yamaguchi, Toru Akishita, and Olivier Elshocht. Adversarial attacks for tabular data: Application to fraud detection and imbalanced data. *arXiv preprint arXiv:2101.08030*, 2021. Chun-Hao Chang, Rich Caruana, and Anna Goldenberg. Node-gam: Neural generalized additive model for interpretable deep learning. In *International Conference on Learning Representations*, 2021. Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In *Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining*, pp. 785–794, 2016. Yingshi Chen. Attention augmented differentiable forest for tabular data. *arXiv preprint arXiv:2010.02921*, 2020. Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra. Notes from the ai frontier: Insights from hundreds of use cases. *McKinsey Global Institute*, 2, 2018. Jillian M Clements, Di Xu, Nooshin Yousefi, and Dmitry Efimov. Sequential deep learning for credit risk monitoring with tabular financial data. *arXiv preprint arXiv:2012.15330*, 2020. Shijie Cui, Agus Sudjianto, Aijun Zhang, and Runze Li. Enhancing robustness of gradient-boosted decision trees through one-hot encoding and regularization. *arXiv preprint arXiv:2304.13761*, 2023. Jerome H Friedman. Greedy function approximation: a gradient boosting machine. *Annals of statistics*, pp. 1189–1232, 2001. Nicholas Frosst and Geoffrey Hinton. Distilling a neural network into a soft decision tree. *arXiv preprint arXiv:1711.09784*, 2017.
MO5PiKHELW
All of the results are on a single model architecture (BERT base). On the one hand, this makes sense, since an extremely wide range of experiments are carried out. On the other hand, we don't know whether the connection between sudden drops in the loss and syntactic knowledge would apply at larger scales, with causal language modeling, etc.
Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs Angelica Chen¹ Ravid Shwartz-Ziv¹ Kyunghyun Cho¹,²,³ Matthew L. Leavitt⁴ Naomi Saphra⁵ {angelica.chen, ravid.shwartz.ziv, kyunghyun.cho}@nyu.edu matthew@datologyai.com nsaphra@fas.harvard.edu ¹NYU ²Genentech ³CIFAR LMB ⁴DatologyAI ⁵Kempner Institute, Harvard Abstract Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that demonstrates how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in pretraining when models abruptly acquire SAS, concurrent with a steep drop in loss. This breakthrough precipitates the subsequent acquisition of linguistic capabilities. We then examine the causal role of SAS by manipulating SAS during training, and demonstrate that SAS is necessary for the development of grammatical capabilities. We further find that SAS competes with other beneficial traits during training, and that briefly suppressing SAS improves model quality. These findings offer an interpretation of a real-world example of both simplicity bias and breakthrough training dynamics. 1 Introduction While language model training usually leads to smooth improvements in loss over time (Kaplan et al., 2020), not all knowledge emerges uniformly. Instead, language models acquire different capabilities at different points in training. Some capabilities remain fixed (Press et al., 2023), while others decline (McKenzie et al., 2022), as a function of dataset size or model capacity. Certain capabilities even exhibit abrupt improvements—this paper focuses on such discontinuous dynamics, which are often called breakthroughs (Srivastava et al., 2022), emergence (Wei et al., 2022), breaks (Caballero et al., 2023), or phase transitions (Olsson et al., 2022). The interpretability literature rarely illuminates how these capabilities emerge, in part because most analyses only examine the final trained model. Instead, we consider developmental analysis as a complementary explanatory lens. To better understand the role of interpretable artifacts in model development, we analyze and manipulate these artifacts during training. We focus on a case study of Syntactic Attention Structure (SAS), a model behavior thought to relate to grammatical structure. By measuring and controlling the emergence of SAS, we deepen our understanding of the relationship between the internal structural traits and extrinsic capabilities of masked language models (MLMs). SAS occurs when a model learns specialized attention heads that focus on a word’s syntactic neighbors. This behavior emerges naturally during conventional MLM pre-training (Clark et al., 2019; Voita et al., 2019; Manning et al., 2020). We observe an abrupt spike in SAS at a consistent point in training, and explore its impact on MLM capabilities by manipulating SAS during training. Our observations paint a picture of how interpretability artifacts may represent simplicity biases that compete with other learning strategies during MLM training. 
In summary, our main contributions are:

• Monitoring latent syntactic structure (defined in Section 2.1) throughout training, we identify (Section 4.1) a precipitous loss drop composed of multiple phase transitions (defined in Section 2.3) relating to various linguistic abilities. At the onset of this stage (which we call the **structure onset**), SAS spikes. After the spike, the model starts handling complex linguistic phenomena correctly, as signaled by a break in BLiMP score (which we call the **capabilities onset**). Although the functional complexity of the model declines for the rest of training, it increases between these breaks.

• We introduce a regularizer to examine the causal role of SAS (defined in Section 2.2) and use it to show that SAS is necessary for handling complex linguistic phenomena (Section 4.2) and that SAS competes with an alternative strategy that exhibits its own break in the loss curve, which we call the **alternative strategy onset**.

• Section 4.3 shows that briefly suppressing SAS improves model quality and accelerates convergence. Suppressing past the alternative strategy onset damages performance and blocks SAS long-term, suggesting this phase transition terminates a critical learning period.

**Figure 1**: BERT first learns to focus on syntactic neighbors with specialized attention heads, and then exhibits grammatical capabilities in its MLM objective. The former (internal) and the latter (external) model behaviors both emerge abruptly, at moments we respectively call the **structure onset** (▲) and **capabilities onset** (●) (quantified as described in Section 2.3). We separately visualize three runs with different seeds, noting that these seeds differ in the stability of Unlabeled Attachment Score (UAS; see Section 2.1) after the structure onset, but uniformly show that SAS emerges almost entirely in a brief window of time. We show (a) MLM loss, with 95% confidence intervals across samples by nonparametric bootstrapping; (b) internal grammar structure, measured by UAS on the parse induced by the attention distributions; and (c) external grammar capabilities, measured by average BLiMP accuracy with 95% confidence intervals across tasks by nonparametric bootstrapping.

## 2 METHODS

### 2.1 SYNTACTIC ATTENTION STRUCTURE

One proposal for interpreting attention is to treat some attention weights as syntactic connections (Manning et al., 2020; Voita et al., 2019; Clark et al., 2019). Our method is based on Clark et al. (2019), who find that some specialized attention heads focus on the target word’s dependency relations. Dependency parses describe latent syntactic structure. Each word in a sentence has a word that it modifies, which is its parent in the syntax tree. Each dependency is labeled—e.g., an adjective modifies a noun through an `amod` relation in the Universal Dependencies annotation system (Nivre et al., 2017). In the example that follows, when an MLM predicts the word `nests`, it is likely to rely heavily on its syntactic relations `builds` and `ugly`. One head may attend to adjectival modifiers like `ugly` while another attends to direct objects like `builds`. We call this tendency to form heads that specialize in specific syntactic relations Syntactic Attention Structure (SAS). To measure SAS, we follow Clark et al. (2019) in using a simple probe based off the surface-level attention patterns, detailed in Appendix A. The probe provides an implicit parse, with an accuracy measured by **unlabeled attachment score** (UAS).
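To make the probe concrete, here is a minimal sketch (our own simplification of the procedure summarized in Appendix A): for each attention head, predict every word's parent as its most-attended-to word and score the head by UAS; the best head's score is reported. The real probe additionally handles special tokens, word-piece aggregation, an offset baseline, and per-relation head selection, none of which are shown here.

```python
import numpy as np

def best_head_uas(attn, gold_heads):
    """attn: [n_layers, n_heads, seq_len, seq_len] attention weights for one sentence
       (rows = attending word, columns = attended word, special tokens already removed).
       gold_heads: list of each word's syntactic parent index (-1 marks the root word)."""
    n_layers, n_heads, _, _ = attn.shape
    gold = np.asarray(gold_heads)
    mask = gold >= 0                                     # the root has no parent to recover
    best = 0.0
    for layer in range(n_layers):
        for head in range(n_heads):
            pred = attn[layer, head].argmax(axis=-1)     # most-attended word per position
            best = max(best, float((pred[mask] == gold[mask]).mean()))
    return best

# Toy example: 1 layer, 2 heads, 3 words; word 1 is the root and parents words 0 and 2.
attn = np.random.dirichlet(np.ones(3), size=(1, 2, 3))
print(best_head_uas(attn, gold_heads=[1, -1, 1]))
```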
### 2.2 Controlling SAS In addition to training models with BERT\textsubscript{Base} parameters, we also train models where SAS is promoted or suppressed. The model with SAS promoted throughout training is called BERT\textsubscript{SAS+}, while the model with SAS suppressed throughout training is called BERT\textsubscript{SAS-}. In order to adjust SAS for these models, we train a BERT\textsubscript{Base} model through methods that are largely conventional (Section 3.1), with one difference. We add a **syntactic regularizer** that manipulates the structure of the attention distributions using a syntacticity score $\gamma(x_i, x_j)$, equal to the maximum attention weight between syntactically connected words $i$ and $j$. We use this regularizer to penalize or reward higher attention weights on a token’s syntactic neighbors by adding it to the MLM loss $L_{MLM}$. We scale the regularizer by a constant coefficient $\lambda$ which may be negative to promote SAS or positive to suppress SAS. If we denote $D(x)$ as the set of all dependents of $x$, then the new loss is: $$L(x) = L_{MLM}(x) + \lambda \sum_{i=1}^{|x|} \sum_{x_j \in D(x_i)} \gamma(x_i, x_j)$$ ### 2.3 Identifying Breakthroughs This paper studies **breakthroughs**: sudden changes in model behavior during a brief window of training. We use the term **breakthroughs** interchangeably with **phase transitions**, **breaks**, and **emergence**, as has been done in past literature (Olsson et al., 2022; Srivastava et al., 2022; Wei et al., 2022; Caballero et al., 2023). What do we consider to be a breakthrough, given a metric $f$ at some distance (e.g., in timesteps) from initialization $d$? We are looking for break point $d^*$ with the sharpest angle in the trajectory of $f$, as determined by the slope between $d^*$ and $d^* \pm \Delta$ for some distance $\Delta$. If we have no measurements at the required distance, we infer a value for $f$ based on the available checkpoints—e.g., if $d$ is measured in discrete timesteps, we calculate the angle of loss at 50K steps for $\Delta = 5K$ by imputing the loss from checkpoints at 45K and 55K steps to calculate slope. $$\text{break}(f, \Delta) = \arg \max_t [f(t + \Delta) - f(t)] - [f(t) - f(t - \Delta)]$$ In other words, $\text{break}(f, \Delta)$ is the point $t$ that maximizes the difference between the slope from $f(t)$ to $f(t + \Delta)$ and the slope from $f(t - \Delta)$ to $f(t)$, approximating the point of maximum acceleration. ### 3 Models and Data #### 3.1 Architecture and Training We pre-train BERT\textsubscript{Base} models using largely the same training set-up and dataset as Sellam et al. (2022). We use the uncased architecture with 12 layers of 768 dimensions each and train with the AdamW optimizer (Loshchilov & Hutter, 2019) for 1M steps with learning rate of 1e-4, 10,000 warm-up steps and training batch size of 256 on a single 4 × 100 NVIDIA A100 node. Our results only consider checkpoints that are recorded while pretraining remains numerically stable for all seeds, so we only analyze up to 300K steps. Our training set-up departs from the original BERT set-up (Devlin et al., 2019) in that we use a fixed sequence length of 512 throughout training, which was shared by Sellam et al. (2022). We also use the same WordPiece-based tokenizer as Devlin et al. (2019) and mask tokens with 15% probability. 
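For reference, the following is a small, self-contained reading of the break-point definition in Eq. (2) of Section 2.3, assuming the metric has been measured at a set of checkpoints and is interpolated linearly in between; the function and variable names are ours.

```python
import numpy as np

def find_break(steps, values, delta):
    """Return the step t maximizing [f(t + delta) - f(t)] - [f(t) - f(t - delta)],
       i.e. the point of (approximately) maximum acceleration of the metric f."""
    f = lambda t: np.interp(t, steps, values)            # impute f between checkpoints
    candidates = [t for t in steps if steps[0] + delta <= t <= steps[-1] - delta]
    scores = [(f(t + delta) - f(t)) - (f(t) - f(t - delta)) for t in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example: a metric that is flat, then rises sharply around step 50.
steps = np.arange(0, 101, 5)
values = np.where(steps < 50, 0.1, 0.1 + 0.02 * (steps - 50))
print(find_break(steps, values, delta=5))                # -> a step at the onset, 50
```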
Unless otherwise stated, all experiments are implemented with the HuggingFace transformers (v4.12.5) (Wolf et al., 2020), Huggingface datasets (v2.7.1) (Lhoest et al., 2021), and Pytorch (v1.11) (Paszke et al., 2019) libraries. Our pre-training datasets consist of BookCorpus (Zhu et al., 2015) and English Wikipedia (Foundation, 2022). Since we do not have access to the original BERT pre-training dataset, we use a more recent Wikipedia dump from May 2020. For pre-training runs where syntactic regularization is applied, we dependency parse Wikipedia with spacy (Honnibal & Montani, 2017) for our silver standard labels. 3.2 Finetuning and Probing **Fine-tuning on GLUE** Our fine-tuning set-up for each GLUE task matches that of the original paper (Wang et al., 2018), with initial learning rate $1e-4$, batch size of 32, and 3 total epochs. **Evaluating on BLiMP** BLiMP (Warstadt et al., 2020a) is a benchmark of minimal pairs for evaluating knowledge of various English grammatical phenomena. We evaluate performance using the MLM scoring function from Salazar et al. (2020) to compute the pseudo-log-likelihood of the sentences in each minimal pair, and counting the MLM as correct when it assigns a higher value to the acceptable sentence in the pair. Further implementation details are in Appendix D. **Evaluating SAS dependency parsing** We measure SAS by evaluating the model’s implicit best-head attention parse (Eq. (3), Clark et al., 2019) on a random sample of 1000 documents from the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1999), with the syntax annotation labels converted into the Stanford Dependencies format by the Stanford Dependencies converter (Schuster & Manning, 2016). We evaluate parse quality using the Unlabeled Attachment Score (UAS) computed from the attention map, as described in Eq. (5). 4 Results Often, interpretable artifacts are assumed to be essential to model performance. However, evidence for the importance of SAS exists only at the instance level on a single trained model. We know that specialized heads can predict dependencies (Clark et al., 2019) and that pruning them damages performance more than pruning other heads (Voita et al., 2019). However, these results are only weak evidence that SAS is essential for modeling grammar. Passive observation of a trained model may discover artifacts that occur as a side effect of training without any effect on model capabilities. Causal methods that intervene on particular components at test time (Vig et al., 2020; Meng et al., 2023), meanwhile, may interact with the rest of the model in complex ways, spuriously implying a component to be essential for performance when it could be removed if it were not so entangled with other features. They also only address whether a component is necessary at test time, and not whether that component is necessary during learning. Both test-time approaches—passive observations and causal interventions—are limited. We begin by confirming the assumption that SAS must be essential to performance. To motivate the case for skepticism of the role of SAS, we note a lack of correlation between SAS metrics and model capabilities across random pretraining seeds (Appendix E). After first strengthening the evidence for SAS as a meaningful phenomenon by taking model development into account, we then draw connections to the literature on phase transitions, simplicity bias, and model complexity. 
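As a concrete illustration of the BLiMP scoring described in Section 3.2, the sketch below computes the pseudo-log-likelihood of Salazar et al. (2020) by masking one position at a time and picking the sentence of a minimal pair with the higher score. It is a simplified, unbatched version of our own; the checkpoint name and the example pair are placeholders rather than items from BLiMP.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Mask each non-special token in turn and sum its log-probability.
    for pos in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[pos]].item()
    return total

good = "The cats annoy Tim."     # illustrative acceptable sentence
bad = "The cats annoys Tim."     # illustrative unacceptable counterpart
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))  # counted as correct if True
```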
4.1 The Syntax Acquisition Phase Most work on scaling laws (Kaplan et al., 2020) presents test loss as a quantity that homogeneously responds to the scale of training, declining by a power law relative to the size of the corpus. In the MLM setting, we instead identify a precipitous drop in the loss curve of BERT\textsubscript{Base} (Fig. 1(a)), consistently spanning 20K-30K timesteps of training across various random seeds. We now show how this rapid learning stage can be interpreted as the composition of two distinct phase transitions. The MLM loss drop occurs alongside the acquisition of grammatical capabilities in two consecutive stages, each distinguished by breaks as defined by Eq. 2. The first stage aligns with the formation of SAS—we call this break in implicit parse UAS the structure onset. As seen in Fig. 1(b), the UAS spikes at a consistent time during each run, in tandem with abrupt improvements in MLM loss (Fig. 1(a)) and finetuning metrics (Fig. 2(b)). Immediately following the spike, UAS plateaus, but the loss continues to drop precipitously before leveling off. The second part of this loss drop is associated with a break in the observed grammatical capabilities of the model, as measured by accuracy on BLiMP (Fig. 1(c)). We call the BLiMP break the capabilities onset. We show similar trajectories on the MultiBERTs (Sellam et al., 2022) reproductions (Appendix I). By observing these phase transitions, we can see that the internal representation of grammar, in the form of syntactic attention, precipitates the external observation of grammatical behavior, in the form of correct language modeling judgements on linguistically challenging examples. This is not only a single breakthrough during training, but a sequence of breakthroughs that appear to be dependent on each other. We might compare this to the “checkmate in one” BIG-Bench task, a known breakthrough behavior in autoregressive language models (Srivastava et al., 2022). Only at a large scale can models accurately identify checkmate moves, but further exploration revealed that the model was progressing in a linear fashion at offering consistently valid chess moves before that point. The authors posited that the checkmate capability was dependent on the ability to make valid chess moves, and likewise it seems we have found that grammatical capabilities are dependent on a latent representation of syntactic structure in the form of SAS. We find that the existence of these phase transitions holds even when using continuous metrics (Appendix I), in contrast to Schaeffer et al. (2023), who found that many abrupt improvements in capabilities are due to the choice of thresholded metrics like accuracy. We also find that the phase transitions hold even when setting the x-axis to some continuous alternative to discrete training timesteps, such as weight norm (Appendix F). Thus both x-axis and y-axis may use non-thresholded scales, and the phase transitions remain present. Complexity and Compression The capabilities of language models have often been explained as a form of compression (Chiang, 2023; Cho, 2023; Sutskever, 2023), implying a continual decrease in functional complexity throughout training. A more nuanced view of training is suggested by work on critical learning periods (Achille et al., 2018) and the information bottleneck (IB) theory (Shwartz-Ziv & Tishby, 2017b), which tie phase transitions to shifts in complexity (Arnold et al., 2023). When we plot various complexity metrics (see Fig. 
2(a) for TwoNN intrinsic dimension (Facco et al., 2017), and Appendix I.2 for additional metrics), we see a decline in complexity for most of training. However, an abrupt memorization phase of increasing complexity occurs between the structure and capabilities onsets, during which the model rapidly acquires information. (Although the mid-training timing of this memorization phase is novel, these dual phases are supported by the literature on IB and critical learning periods, as further discussed in Appendix C.) Taken together, we see that the structure and capabilities onsets are not just phase transitions of internal structure and external capabilities, but also phase transitions of functional complexity. The steep decrease in complexity that immediately precedes the structure onset (Fig. 2(a)) supports the intuitive understanding that SAS must be simplistic in order to be human interpretable. Generally, models tend to favor simpler functions like SAS earlier in training (Hermann & Lampinen, 2020; Shah et al., 2020; Nakkiran et al., 2019; Valle-Pérez et al., 2019; Arpit et al., 2017), a tendency often referred to as simplicity bias. In Section 4.3 we use the simplicity bias literature to motivate our hypotheses about and interventions for competing strategies. 4.2 Controlling SAS Having established the natural emergence of SAS, we use our syntacticity regularizer (Section 2.2) to evaluate whether SAS is truly necessary for handling complex grammatical phenomena. We confirm that this regularizer can suppress or accelerate the SAS phase (Fig. 3(b)). As seen in Fig. 3(a), enhancing SAS behavior throughout training (BERT\textsubscript{SAS+}) leads to early improvements in MLM performance, but hurts later model quality.\footnote{Note that in BERT\textsubscript{SAS+}, we see the capabilities onset is after the structure onset, but before the SAS plateau, suggesting that SAS only needs to hit some threshold to precipitate the capabilities onset, and does not need to stabilize.} Conversely, suppressing SAS (BERT\textsubscript{SAS-}) damages both early performance and long term performance. Suppressing SAS during training prevents the emergence of linguistically complex capabilities (Fig. 3(c)). In other words, preventing the internal grammar structure onset will also prevent the external grammar capabilities onset that follows it. However, there exists an early apparent phase transition in the MLM loss (at around 6K steps), which suggests that an alternative strategy emerges that leads to improvements prior to the structure onset. We therefore refer to this early inflection as the \textbf{alternative strategy onset}. Our results suggest that SAS is crucial for effectively representing grammar, but the existence of the alternative strategy onset implies that SAS also competes with other useful traits in the network. We explore the alternative strategy onset represented by the phase transition under SAS suppression in Appendix M. Importantly, the break in the loss curve occurs earlier in training when suppressing SAS. The implication is profound: that the alternative strategy is competing with SAS, and suppressing SAS permits the model to learn the alternative strategy more effectively and earlier. Inspired by this insight, we next ask whether there can be larger advantages to avoiding the natural SAS-based strategy early in training, thus claiming the benefits of the alternative strategy. 
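To make the intervention of Section 2.2 more concrete, here is a minimal sketch of the regularized objective in Eq. (1), assuming attention maps of shape [layers, heads, seq, seq] and silver dependency pairs for one sentence. The reduction used for the syntacticity score γ (here: max over layers, heads, and both directions) is our reading of the text, and the actual implementation may differ; setting λ < 0 promotes SAS (BERT_SAS+) while λ > 0 suppresses it (BERT_SAS-).

```python
import torch

def syntacticity_penalty(attentions, dep_pairs):
    """attentions: tensor [n_layers, n_heads, seq_len, seq_len] for one sentence.
       dep_pairs: list of (word, dependent) index pairs from the silver dependency parse.
       Returns sum_i sum_{j in D(x_i)} gamma(x_i, x_j), with gamma taken as the maximum
       attention weight between the two positions (max over layers, heads, and directions)."""
    total = attentions.new_zeros(())
    for i, j in dep_pairs:
        gamma = torch.maximum(attentions[..., i, j].max(), attentions[..., j, i].max())
        total = total + gamma
    return total

def regularized_loss(mlm_loss, attentions, dep_pairs, lam):
    # lam < 0 rewards syntactic attention, lam > 0 penalizes it, lam = 0 recovers the baseline.
    return mlm_loss + lam * syntacticity_penalty(attentions, dep_pairs)
```

The multistage models of the next section simply switch `lam` from 0.001 to 0 at a pre-specified training step.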
4.3 Early-stage SAS Regularization Because BERT\textsubscript{SAS-} briefly outperforms both BERT\textsubscript{Base} and BERT\textsubscript{SAS+}, we have argued that suppressing SAS implicitly promotes a competing strategy. This notion of competition between features or strategies is well-documented in the literature on simplicity bias (Shah et al., 2020; Arpit et al., 2017; Hermann & Lampinen, 2020; Pezeshki et al., 2021). Achille et al. (2018) find that some patterns must be acquired early in training in order to be learned at all, so depending too much on an overly simplistic strategy early in training can have significant long-term consequences on performance. To test the hypothesis that learning SAS early allows SAS to out-compete other beneficial strategies, this section presents experiments that only suppress the early acquisition of SAS. For multistage regularized models, we first suppress SAS with $\lambda = 0.001$ and then set $\lambda = 0$ after a pre-specified timestep in training. These models are named after the timestep that SAS is suppressed until, e.g., BERT\textsuperscript{(3k)}\textsubscript{SAS} is the model where $\lambda$ is set to 0 at timestep 3000. We find that suppressing SAS early on improves the effectiveness of training later. Specifically, BERT\textsuperscript{(3k)}\textsubscript{SAS} outperforms BERT\textsubscript{Base} even well after both models pass their structure and capabilities onsets (Fig. 4(a), Table I), although these advantages cease to be significant after longer training runs. Figure 4: Metrics for the checkpoint at 100k steps, for various models with SAS suppressed early in training. The vertical line marks the BERT\textsubscript{SAS-} alternative strategy onset; note that model quality is worst when the regularizer is changed during this phase transition. The x-axis reflects the timestep when the regularizer $\lambda$ is changed from 0.001 to 0. To control for the length of training time without suppressing SAS, Appendix Q presents the same findings measured at a checkpoint exactly 50K timesteps after releasing the regularizer. On y-axis: (a) MLM loss; (b) Implicit parse accuracy (UAS); (c) GLUE average (Task breakdown in Appendix P); (d) BLiMP average (Task break down in Appendix K). Shaded regions represent 95% confidence intervals across three seeds. | Model | MLM Loss ↓ | GLUE average ↑ | BLiMP average ↑ | |----------------|------------|---------------|----------------| | BERT\textsubscript{Base} | 1.77 ± 0.01 | 0.74 ± 0.01 | 0.74 ± 0.02 | | BERT\textsubscript{SAS+} | 2.39 ± 0.03 | 0.59 ± 0.01 | 0.74 ± 0.01 | | BERT\textsubscript{SAS-} | 2.02 ± 0.01 | 0.69 ± 0.02 | 0.67 ± 0.03 | | BERT\textsubscript{(3k)} | 1.75 ± 0.01 | 0.74 ± 0.00 | 0.75 ± 0.01 | Table 1: Evaluation metrics, with standard error, after training for 100K steps (~ 13M tokens), averaged across three random seeds for each regularizer setting. We selected BERT\textsubscript{(3k)} as the best multistage hyperparameter setting based on MLM test loss at 100K steps. Bolded values significantly outperform non-bolded values in the same column under a 1-sided Welch’s $t$-test. (Appendix P). Some multistage models even have more consistent SAS than BERT\textsubscript{Base} (Fig. 4(b)). We posit that certain associative patterns are learned more quickly while suppressing SAS, and these patterns not only support overall performance but even provide improved features to acquire SAS. 4.3.1 When can we recover the SAS phase transition? 
Inspecting the learning curves of the temporarily suppressed models, we find that briefly suppressing SAS can promote performance (Appendix N) and accelerate the structure onset (Fig. 5(a)) while augmenting it (Fig. 5(b)). However, after more prolonged suppression of SAS, it becomes impossible to hit the dramatic spike in implicit parse UAS that we see in BERT\textsubscript{Base} (Section 4.3). If the SAS phase transition is prevented, MLM performance falls significantly compared to BERT\textsubscript{Base} and we see no SAS spike (Appendix N). It appears that we must choose between phase transitions; the model cannot undergo first the alternative strategy onset and then the structure onset. In fact, we measure the worst model quality when we switch settings during the alternative strategy transition (Fig. 4). Figure 5: If SAS is suppressed only briefly, it accelerates and augments the SAS onset. However, further suppression delays and attenuates the spike in UAS, until it eventually ceases to show a clear inflection. A vertical dotted line marks the BERT\textsubscript{SAS}-alternative strategy onset and the shaded region indicates the 95% confidence interval across three seeds. 5 DISCUSSION AND CONCLUSIONS Our work is a response to the limitations of probes that analyze only a single model checkpoint without regard to its training history (Saphra, 2023). We posit that developmental explanations, which incorporate a model’s training history, provide critical perspective and explanatory power. We have used this developmental approach to demonstrate the necessity of SAS for grammatical reasoning in MLMs, and have furthermore used SAS as a case study to shed light on alleviating simplicity bias, the dynamics of model complexity, and the risks of changing the optimizer during phase transitions. Our work also guides further understanding of many deep learning phenomena, and may inspire a more rigorous approach to science of deep learning. For further discussion Appendix C presents an extended review of related work on causal interpretability, simplicity bias, and phase transitions. 5.1 EARLY DYNAMICS AND SIMPLICITY BIAS Sutton (2019) introduced the machine learning world to the Bitter Lesson: models that use informed priors based on domain understanding will always lose to generic models trained on large quantities of data. Our work suggests that we might go further: even generically learned structure can form a disadvantageously strong prior, if that structure reflects human expert models of syntax. In other words, human interpretations of natural phenomena are so simplistic that their presence early in training can serve as a negative signal. If this observation holds in natural language—a modality that has evolved specifically to be human interpretable—how much worse might simplicity bias impact performance on other domains like scientific and physical modeling? Dependency and competition We have found evidence of multiple possible relationships between emergent behaviors. Previous work suggests that model properties can depend on one another, e.g., checkmate-in-one capabilities depend on first learning valid chess moves (Srivastava et al., 2022); or compete with one another, e.g., as sparse and dense representation strategies compete on arithmetic tasks (Merrill et al., 2023). Similarly, we first find evidence of a dependency relationship, based on our evidence that SAS is a prerequisite for many linguistic capabilities as indicated by BLiMP. 
Then, we identify a competitive relationship, based on our observations that suppressing SAS leads to an alternative strategy that prioritizes context differently. These distinct relationships shed light on how model behaviors interact during training and may suggest training improvements that delay or promote particular behaviors. Existing work in simplicity bias (Shah et al., 2020; Pezeshki et al., 2021) suggests that a preference for simple heuristics might prevent the model from acquiring a more reliable strategy. Our results appear to be evidence of this pitfall in practice. Pretraining The benefits of early training without permitting SAS bear an intriguing parallel to pretraining. Just as pretraining removes the particulars of the downstream task by training on generic language structure, early SAS suppression removes the particulars of linguistic structure itself. In doing so, we encourage the MLM to treat the entire sequence without regard for proximity to the target word, as a bag-of-words model might. Therefore, the beginning of training is even more unstructured and generic than it would be under the baseline MLM objective. Curriculum learning We also offer some insights into why curriculum learning is rarely effective at large scales. Simple data is likely to encourage simplistic strategies, so any curriculum that homogenizes the early distribution could promote a simplistic strategy, helping early performance but harming later performance. Predictably, curricula no longer help at large scales (Wu et al., 2021). 5.2 Phase transitions Instability at critical points Abrupt changes are rarely documented directly at the level of validation loss, but we show that they may be observed—and interpreted—in realistic settings. Smooth improvements in loss may even elide abrupt breakthroughs in specific capabilities, as discussed in Appendix M.1. Our multistage results point to a surprising effect: that the worst time to change the regularization is during a phase transition. When we release the SAS suppression well before the point at which the alternative transition starts during BERT\textsubscript{SAS} training (i.e., the alternative strategy onset), we find it is possible to recover the SAS transition, preventing damage to GLUE, BLiMP, and MLM loss metrics. Likewise, although releasing the regularization after the alternative transition prevents the recovery of SAS, it nonetheless incurs limited damage to model quality metrics. However, releasing the regularizer during the phase transition leads to a substantially worse model under every metric. These findings suggest that the moment of breakthrough constitutes a critical point where an optimizer misstep can damage the performance of the model, possibly even at convergence. This phenomenon may be consequential for future optimization research. 5.3 Interpretability epistemology While SAS was already known to emerge naturally in MLMs, there were reasons to be skeptical of its necessity. One objection is that raw attention distribution information is not a guaranteed proxy for information flow (Abnar & Zuidema, 2020; Elhayarajh & Jurafsky, 2021). Another thread questions the interpretability of attention by obfuscating the attention weights without damaging model performance (Jain & Wallace, 2019; Serrano & Smith, 2019). If the fundamentally informative nature of attention is subject to extensive debate (Bibal et al., 2022), we must also be skeptical of overstating its connection to syntax. 
Attention syntacticity is a microcosm of wider failures in the science of deep learning, which has been criticized for a tendency to use anecdotal observations and post-hoc explanations, rather than rigorous correlational or causal tests (Forde & Paganini, 2019). Prior evidence for the importance of SAS came in two forms, both of which operate post-hoc at the instance level on specific samples: instance-level observation in fully trained networks (Clark et al., 2019) and instance-level causal experiments in fully trained networks (Voita et al., 2019). Observational studies might discover structures that emerge as a side effect of training, rather than those crucial to the operation of the model. Traits that emerge as a side effect of a process but appear crucial to performance are called spandrels in evolutionary biology; possible examples include human chins (Yong, 2016) and enjoyment of music (Pinker, 1997). While instance-level causal experiments like Voita et al. (2019) may be epistemically stronger than the observational studies, the network’s failure to recover from a causal intervention does not indicate that it relies on the structure provided. Instead, the network may be more brittle to large distribution shifts on the relevant features, without truly relying on those features (Tucker et al., 2021). One possible scenario is that a behavior may develop early in training and become vestigial (like a human’s tailbone (Mukhopadhyay et al., 2012)) but sufficiently integrated into subnetworks that generate and cancel information that the network cannot easily recover from its removal. To support the skeptical case, we find that SAS metrics were not correlated with MLM capabilities across random seed (Fig. 6). We provide several epistemically strong results in favor of the importance of SAS. First, we study models in development (Section 4.1), finding that the SAS phase transition directly precipitates the emergence of linguistic capabilities. This result supports that blackbox grammatical capabilities depend on measurable internal structures. Second, we have causal interventions on development (Section 4.2), which again reveal the importance of this head specialization behavior by promoting and suppressing it. Instance-level interpretability methods, at best, offer evidence that a trait emerges and the model cannot recover from its removal; we can now say that certain capabilities depend on this trait—although the model eventually discovers alternative ways to represent some of them. ACKNOWLEDGMENTS We thank Samuel R. Bowman and Jason Phang for helpful discussions during the development of this project. This work was supported by National Science Foundation Award 1922658, Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling), and the Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI). Matthew Leavitt was employed at Mosaic ML and Naomi Saphra was employed at NYU for the majority of the work on this project. REFERENCES Samira Abnar and Willem Zuidema. Quantifying attention flow in transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4190–4197, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.385. URL https://aclanthology.org/2020.acl-main.385 Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep networks. 
In International Conference on Learning Representations, 2018. Alessio Ansuini, Alessandro Laio, Jakob H Macke, and Davide Zoccolan. Intrinsic dimension of data representations in deep neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/cfcce0621b49c983991ead4c3d4d3b6b-Paper.pdf Julian Arnold, Niels Lörch, Flemming Holtorf, and Frank Schäfer. Machine learning phase transitions: Connections to the fisher information, 2023. Devansh Arpit, Stanislaw Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. A Closer Look at Memorization in Deep Networks. arXiv:1706.05394 [cs, stat], June 2017. URL http://arxiv.org/abs/1706.05394 arXiv: 1706.05394. Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit, July 2022. URL http://arxiv.org/abs/2207.08799 arXiv:2207.08799 [cs, math, stat]. Ido Ben-Shaul, Raviv Shwartz-Ziv, Tomer Galanti, Shai Dekel, and Yann LeCun. Reverse engineering self-supervised learning. arXiv preprint arXiv:2305.15614, 2023. Adrien Bibal, Rémi Cardon, David Alfter, Rodrigo Wilkens, Xiaou Wang, Thomas François, and Patrick Watrin. Is Attention Explanation? An Introduction to the Debate. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3889–3900, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.269. URL https://aclanthology.org/2022.acl-long.269 Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=sckjveq1CZ David C. Chiang, Sung-Feng Huang, and Hung-yi Lee. Pretrained Language Model Embryology: The Birth of ALBERT. arXiv:2010.02480 [cs], October 2020. URL http://arxiv.org/abs/2010.02480 arXiv: 2010.02480. Ted Chiang. ChatGPT Is a Blurry JPEG of the Web. The New Yorker, February 2023. ISSN 0028-792X. URL https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web Section: annals of artificial intelligence.
fIKRJeLH7W
The results show that the searched BCs prefer to be connected to the front layers of SNNs. Is it because adding such a connection can alleviate the problem of exploding or vanishing gradients in deep networks?
PROPER BACKWARD CONNECTION PLACEMENT BOOSTS SPIKING NEURAL NETWORKS Anonymous authors Paper under double-blind review ABSTRACT We study how backward connections (BCs, also known as temporal feedback connections) impact the performance of Spiking Neural Networks (SNNs) and how to effectively search the placement of BC to boost SNNs’ performance. Presumably, BCs have the potential to enhance SNNs’ representation capacity by creating new temporal pathways in the SNN. However, we empirically find that BCs at different positions are not equally beneficial to SNNs, and some of them may even lead to performance degradation. Given the large search space of BCs placement, we propose Backward Connection Neural Architecture Search (BCNAS-SNN), a framework that automatically identifies the optimal BCs pattern by searching all possible BCs including both intra-block and inter-block ones. Extensive experiments indicate that BCNAS-SNN is able to achieve state-of-the-art results on the CIFAR10, CIFAR100, and Tiny-ImageNet datasets, with accuracy of 95.67%, 78.59%, and 63.43%, respectively. A set of ablation studies are further presented to understand the efficacy of each design component in BCNAS-SNN. 1 INTRODUCTION Spiking neural networks (SNNs) are considered the third generation of neural networks [Maass (1997); Roy et al. (2019); Christensen et al. (2022)] due to their unique properties, such as asynchronous computation, low power consumption [Akopyan et al. (2015); Davies et al. (2018)], and inherent temporal dynamics. SNNs transmit information in the form of binary spikes during multiple time steps and thus enjoy the advantage of multiplication-free inference against artificial neural networks (ANNs). To improve the performance of SNNs, many efforts have been made to design various training algorithms and spiking neurons. Researchers have focused on the surrogate gradient for the firing function [Wu et al. (2018); Bellec et al.] the loss function [Deng et al. (2021)], and the normalization layers [Zheng et al. (2021); Duan et al.]. However, relatively few studies have explored the architecture design of SNNs, particularly with regard to backward connections (BCs) [Panda et al. (2020)]. SNNs are composed of Leaky-Integrate-and-Fire (LIF) [Izhikevich (2003)] neurons, which store and transmit temporal information. The backward connection is a promising component to make better use of this temporal information in SNNs. For instance, SRNNs [Yin et al. (2020)], in which each layer has recurrent connections, is able to achieve better performance than its counterpart without BC. BackEISNN [Zhao et al. (2022)] applies adaptive time-delayed self-feedback to regulate the precision of spikes. However, these studies use fixed backward connections for each spiking neuron. SNASNet [Kim et al. (2022)] leverages a cell-based Neural Architecture Search (NAS) method to search the inter-block BCs. While the question of how to effectively add global backward connections to SNNs remains an open problem. In this paper, we aim to study the global backward connections in SNNs which can link any two layers regardless of feature resolution. To enable such a global BC, we leverage an upsample layer and a $1 \times 1$ spiking convolutional layer to send the activations (Figure 2A). In order to locate the optimal global BCs, we propose BCNAS-SNN, an efficient BCs searching framework based on an evolutionary algorithm. 
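As a rough, hypothetical sketch of how such a global backward connection could be realized (the exact design follows Figure 2A and may differ in details), the feedback path upsamples the deeper layer's spikes from the previous time step to the earlier layer's spatial resolution and matches channels with a 1×1 convolution; in the actual spiking implementation a LIF neuron would follow the 1×1 convolution.

```python
import torch
import torch.nn as nn

class BackwardConnection(nn.Module):
    """Feed spikes from a deep layer (previous time step) back to a shallow layer."""
    def __init__(self, in_channels, out_channels, target_size):
        super().__init__()
        self.upsample = nn.Upsample(size=target_size, mode="nearest")
        self.conv1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, spikes_from_deep_layer):
        # spikes_from_deep_layer: binary spike map [B, C_deep, H_deep, W_deep] from step t-1
        x = self.upsample(spikes_from_deep_layer)
        return self.conv1x1(x)   # added to the pre-synaptic input of the earlier layer at step t

# Toy usage: feed 8x8 spikes from a 256-channel layer back into a 64-channel 32x32 layer.
bc = BackwardConnection(in_channels=256, out_channels=64, target_size=(32, 32))
prev_spikes = (torch.rand(2, 256, 8, 8) > 0.8).float()
feedback_current = bc(prev_spikes)   # shape [2, 64, 32, 32]
```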
Based on the characteristics of BCs, we introduce two additional design components to improve and accelerate the searching process: 1) we carefully craft the search space and the mutation/crossover for BCs to balance exploration and exploitation; 2) we initialize the evolutionary algorithm by score-based selection from a large population of random architectures.

Figure 1: The optimal BCs are always connected to the front LIF layers. We have drawn all searched backward connections based on ResNets across various datasets, including CIFAR10/100, CIFAR10DVS and Tiny-ImageNet. More details are included in Appendix C.

Extensive experiments on a variety of datasets demonstrate that BCNAS-SNN is able to discover appropriate BCs, which improve standard SNNs’ accuracy. Additionally, as shown in Figure 1, an intriguing phenomenon is that the searched backward connections prefer to be connected to the beginning of SNNs, which motivates future architecture design of SNNs. In summary, our key contributions are:
• To the best of our knowledge, this work is the first study to thoroughly investigate the unequal effects of global backward connections in SNNs.
• We develop an efficient spike-based NAS framework with a well-crafted search space and strategy. The score-based selective initialization allows the evolutionary search algorithm to converge fast and obtain optimal BCs.
• Extensive experiments on both static and neuromorphic datasets demonstrate the importance of appropriate BCs for SNNs and validate the effectiveness of our proposed search framework. For the CIFAR10/100 and Tiny-ImageNet datasets, the searched architectures achieve top-1 accuracy of 95.67%, 78.59%, and 63.43% for the first time, outperforming existing state-of-the-art results.

2 RELATED WORK

2.1 SPIKING NEURAL NETWORKS

The binary spike mechanism in SNNs leads to the non-differentiability of the activation function [Neftci et al., 2019], which rules out the typically used gradient descent approaches. Direct training approaches utilizing surrogate gradients [Wu et al., 2018; Shrestha & Orchard, 2018; Fang et al., 2021; Li et al., 2021; Deng et al., 2021] have demonstrated superior performance with smaller time steps. For instance, the RecDis-SNN method proposed in [Guo et al., 2022] rectifies the membrane potential distribution to enhance performance. Additionally, the IM-Loss method presented in [Guo et al.] maximizes the information flow in order to enhance expressiveness. Furthermore, the GLIF method [Yao et al.] introduces a gate mechanism to increase the representation space of spiking neurons. In addition to designing training algorithms and spiking neurons, some studies focus on the architecture of SNNs. SNASNet [Kim et al., 2022] and SpikeDHS [Che et al.] adopt NAS to search for optimal SNN architectures based on ANN search spaces [Liu et al., 2018; Dong & Yang, 2019]. AutoSNN [Na et al., 2022] utilizes NAS to search different convolution blocks in SNNs. In this study, we will illustrate that properly incorporating one or two backward connections into ResNet [He et al., 2016] structures improves SNNs significantly.

2.2 NEURAL ARCHITECTURE SEARCH

Neural Architecture Search (NAS) was proposed to automatically select high-performance neural networks. At the early stage, NAS methods [Baker et al., 2016; Zoph & Le, 2016; Zoph et al., 2018] needed to train candidate architectures from scratch, which was very time-consuming.
To alleviate the expensive search cost, many lightweight NAS methods have been proposed, including one-shot weight-sharing NAS and zero-cost (ZC) proxies. For one-shot weight-sharing NAS [Pham et al., 2018; Guo et al., 2020; Cai et al., 2020; Yan et al., 2021], the search process can be divided into two phases: training the super-network and evaluating the candidate architectures. In the training phase, architectures are randomly sampled in each training loop, so all candidate architectures in the search space can be roughly trained. In the evaluation phase, architectures inheriting weights from the trained super-network can be directly evaluated on the validation dataset. To further improve search efficiency, ZC proxies were introduced to evaluate all candidate architectures without training [Krishnakumar et al., 2021]. Mellor et al. [2021] estimated the separability of a minibatch into different linear regions of the output space. Abdelfattah et al. [2020] proposed ZC proxies inspired by pruning-at-initialization techniques [Lee et al., 2018; Turner et al., 2019; Tanaka et al., 2020; Wang et al., 2019]. Some works [Chen et al., 2020; Xu et al., 2021; Zhu et al., 2022] were based on neural tangent kernels [Arora et al., 2019; Du et al., 2019], and Shu et al. [2022] presented a unified theoretical analysis of gradient-based training-free NAS and developed a hybrid NAS framework.

3 PRELIMINARIES

3.1 SPIKING NEURON MODEL

We adopt the iterative LIF model [Wu et al., 2019; Deng et al., 2021], in which the membrane potential is updated as:
\[ v(t + 1) = \tau v(t) + I(t), \]
where \( v(t) \) denotes the membrane potential at time step \( t \), \( \tau \) is the leaky factor, and \( I(t) \) represents the pre-synaptic input, i.e., the product of the synaptic weight \( W \) and the spiking input \( x(t) \). Given the threshold \( V_{th} \), the neuron fires a spike and \( v(t) \) resets to 0 when it exceeds \( V_{th} \). The spike firing and hard reset mechanism can be described as:
\[ s(t + 1) = \Theta(v(t + 1) - V_{th}), \]
\[ v(t + 1) = v(t + 1) \cdot (1 - s(t + 1)), \]
where \( \Theta(\cdot) \) is the Heaviside step function. The output spike \( s(t + 1) \) becomes the post-synaptic spike and propagates to the next layer. In this work, we set \( V_{th} \) to 1 and \( \tau \) to 0.5 for all experiments.

3.2 LOSS FUNCTION

Many previous works define the loss function in SNNs by cross entropy \( L_{CE} \) [Zheng et al., 2021] or mean squared error \( L_{MSE} \) [Fang et al., 2021]. Deng et al. [2021] propose a new loss function, called TET loss, which makes SNNs converge quickly and is described as:
\[ L_{TET} = \frac{1}{T} \sum_{t=1}^{T} L_{CE}[O(t), y], \]
where \( O(t) \) represents the pre-synaptic input of the output layer and \( y \) is the target label in classification tasks. In this work, we use the TET loss for all experiments.

3.3 SURROGATE GRADIENT FUNCTION

By applying the chain rule, we backpropagate the gradients of the loss \( L \) over both the spatial and temporal domains [Wu et al., 2018] as follows:
\[ \frac{\partial L}{\partial W} = \sum_t \frac{\partial L}{\partial s(t)} \frac{\partial s(t)}{\partial v(t)} \frac{\partial v(t)}{\partial I(t)} \frac{\partial I(t)}{\partial W}. \]
However, the gradient of the spiking function, represented by \( \frac{\partial s(t)}{\partial v(t)} \), does not exist. To circumvent this obstacle, the Surrogate Gradient (SG) approach is employed, which approximates the true gradient with a continuous function.
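To make these preliminaries concrete, the following PyTorch sketch implements the iterative LIF dynamics and the TET loss; shapes and names are our own illustrative choices rather than the authors' code, and the backward pass uses the triangle-shaped surrogate defined just below.

```python
import torch
import torch.nn.functional as F

V_TH, TAU = 1.0, 0.5  # threshold and leak factor used throughout the paper


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, triangle-shaped surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= V_TH).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        b = 1.0  # triangle sharpness
        return grad_out * (1.0 / b**2) * torch.clamp(b - (v - V_TH).abs(), min=0.0)


def lif_forward(inputs):
    """Iterate the LIF equations: pre-synaptic inputs I(t) of shape [T, batch, ...] -> spikes s(t)."""
    spikes, v = [], torch.zeros_like(inputs[0])
    for t in range(inputs.shape[0]):
        v = TAU * v + inputs[t]      # leaky integration
        s = SpikeFn.apply(v)         # fire when the membrane potential exceeds V_TH
        v = v * (1.0 - s)            # hard reset
        spikes.append(s)
    return torch.stack(spikes)


def tet_loss(outputs, target):
    """TET loss: average the cross entropy of the output-layer inputs O(t) over all T time steps."""
    return torch.stack([F.cross_entropy(o, target) for o in outputs]).mean()
```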
Figure 2: Concept of backward connections and results of randomly adding backward connections. The backward module contains an upsample layer and a spiking convolutional layer to project the backward spike tensors. We randomly select three architectures with 1 or 2 backward connections and compare their test accuracy against the hand-crafted one without backward connections. This indicates the preference for searched BCs over randomly added or hand-crafted ones.

In this particular study, we utilize the triangle curve [Bellec et al., 2018] as the surrogate gradient, which can be described as:
$$\frac{\partial s(t)}{\partial v(t)} = \frac{1}{b^2} \max(0, b - |v(t) - V_{th}|),$$
where $b$ determines the triangle sharpness and is set to 1.

4 METHODOLOGY

4.1 GLOBAL BACKWARD CONNECTIONS

In this section, we introduce our definition of a global backward connection in SNNs. Prior works have used backward connections within the same LIF layer or between LIF layers with the same resolution [Panda & Roy, 2017; Demin & Nekhaev, 2018; Zhang & Li, 2019; Panda et al., 2020; Yin et al., 2020; Zhao et al., 2022; Kim et al., 2022], and usually the network is shallow. Such a backward connection, however, is essentially a “recurrent” connection that is restricted to transmitting information in a local manner. Moreover, it cannot be applied to previous layers that have different feature resolutions. Therefore, we argue that the potential of backward connections has not been fully exploited. In this study, we take ResNet as the network backbone and treat each LIF layer as a port. A global backward connection can exist between any two ports, including the case where the two ports are the same. The ports in the back layer and the front layer are regarded as the output port and input port of the backward connection, respectively. We adopt fixed operations on each global backward connection (Figure 2A), including nearest interpolation, $1 \times 1$ convolution, and batch normalization, which can connect two LIF layers with different feature resolutions. Formally, the global backward connection from any layer $j$ to layer $i$ ($j \geq i$) charges the membrane potential in layer $i$ as well, given by:
$$I^{(i)}(t) = W^{(i)}s^{(i-1)}(t) + W^{(*)}\gamma(s^{(j)}(t-1)),$$
where $W^{(i)}$ represents the forward weight connection between layer $i - 1$ and layer $i$, $W^{(*)}$ is the backward weight connection, $s^{(i-1)}(t)$ denotes the output spike of layer $i - 1$ at time step $t$, and $\gamma(\cdot)$ denotes the nearest interpolation operation. By using this global backward connection, we can largely extend the possible network topology in the temporal dimension. However, we find that not all global backward connections improve SNN performance. To demonstrate this, we add 1 or 2 randomly sampled backward connections to a ResNet-18 on the CIFAR-100 dataset. Figure 2B shows 3 examples as well as their final accuracy. Backward connections may lead to an accuracy decrease of more than 6%, which means that finding good backward connections that maximize the accuracy of an SNN is non-trivial.

4.2 BCNAS-SNN

To discover BCs that greatly improve the network performance of SNNs, we develop a search framework based on an evolutionary algorithm, called BCNAS-SNN. Inspired by the one-shot weight-sharing method in NAS [Guo et al., 2020; Na et al., 2022], we use the validation accuracy as the ultimate metric for each candidate architecture.
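Before turning to the search itself, the global backward module of Section 4.1 can be made concrete with a short sketch; the channel counts, feature sizes and names below are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BackwardConnection(nn.Module):
    """Projects last-timestep spikes of a later layer j back to the resolution of an earlier layer i."""

    def __init__(self, in_channels, out_channels, target_size):
        super().__init__()
        self.target_size = target_size  # spatial size of layer i's features
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, s_j_prev):
        # gamma(.): nearest interpolation to match layer i's feature resolution
        x = F.interpolate(s_j_prev, size=self.target_size, mode="nearest")
        return self.bn(self.proj(x))  # W(*) gamma(s^(j)(t-1))


# Pre-synaptic input of layer i at time t, cf. the charging equation above:
#   I_i(t) = forward_conv(s_{i-1}(t)) + bc(s_j(t-1))
forward_conv = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1, bias=False)
bc = BackwardConnection(in_channels=256, out_channels=128, target_size=(16, 16))

s_prev_layer = torch.rand(8, 64, 32, 32).round()   # s^(i-1)(t), binary spikes from layer i-1
s_j_last_step = torch.rand(8, 256, 8, 8).round()   # s^(j)(t-1), binary spikes from layer j
I_i = forward_conv(s_prev_layer) + bc(s_j_last_step)
```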
Based on the similarity between different BCs, we define the evolutionary mechanism, including mutation and crossover, to efficiently search for potentially optimal BCs. Meanwhile, the initial population is crucial to the results of evolutionary algorithms and needs an efficient approach to obtain. Many previous works directly use random samples, and some of them make the population size very large, leading to expensive time consumption in the search process. By contrast, we propose a score-based method that quickly collects a small number of superior BCs as the initialization, which proves effective in our experiments. We take a two-step search strategy for BCNAS-SNN, as shown in Figure 3. First, we simultaneously leverage two different training-free scores to choose BCs for the next step. Then, we adopt an accuracy-based evolutionary algorithm to further search for optimal BCs. The rest of this section is divided into three parts. In Section 4.2.1 we define the search space for BCNAS-SNN. The score-based method and the accuracy-based evolutionary algorithm are detailed in Section 4.2.2 and Section 4.2.3, respectively.

4.2.1 SEARCH SPACE

The design of a search space is crucial in NAS and can significantly affect the search results. In this study, we use ResNet as a representative network backbone. ResNet has a fixed number of LIF layers, and thus the number of all possible backward connections is determined. To minimize the complexity and computational cost of searching, we fix the operations on each backward connection and set the maximum number of backward connections for each SNN to $b$. When the number of LIF layers is $m$, we obtain $n = \frac{m(m+1)}{2}$ possible backward connections in total. The size of the search space $\mathcal{A}$ can be calculated as $|\mathcal{A}| = \sum_{i=0}^{b} \binom{n}{i}$. In practice, we find that setting $b$ to 2 is sufficient to boost the performance of SNNs.

4.2.2 SCORE-BASED SELECTIVE INITIALIZATION

N-score & E-score. In order to efficiently find a good initial population for the evolutionary algorithm, we propose a training-free score-based method inspired by zero-shot methods [Abdelfattah et al., 2020; Montufar et al., 2014; Lopes et al., 2021]. Recently, Krishnakumar et al. showed that different zero-shot methods do indeed compute substantially complementary information. Thus, we consider adopting a combined method to initialize the population. NASWOT [Mellor et al., 2021] observed that an initialized network with distinctive representations across different data samples is likely to achieve higher performance, and Kim et al. [2022] succeeded in applying this idea to SNNs. We utilize this approach to calculate the N-score in this work. However, the computation of the N-score ignores the gradient information of the initialized network, which is important for an accurate evaluation of the architecture, so we leverage another zero-shot method to obtain a score that supplements the gradient information. EPE-NAS [Lopes et al., 2021] evaluates how the gradients behave with respect to the input to assess architectures without training, and we adopt it to compute the E-score. We simultaneously use the N-score and E-score to obtain the initial population for the evolutionary algorithm. The specific calculation of the two scores is detailed in Appendix A.
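For reference, the size of the search space defined in Section 4.2.1 is easy to compute explicitly; the snippet below (an illustration, not part of the released code) enumerates the candidate port pairs and evaluates $|\mathcal{A}|$ for a backbone with $m$ LIF layers and at most $b$ backward connections.

```python
from math import comb


def candidate_bcs(m):
    """All possible backward connections for m LIF layers: from output port j back to input port i <= j."""
    return [(i, j) for i in range(1, m + 1) for j in range(i, m + 1)]


def search_space_size(m, b):
    """|A| = sum_{k=0}^{b} C(n, k), with n = m(m+1)/2 possible single connections."""
    n = m * (m + 1) // 2
    return sum(comb(n, k) for k in range(b + 1))


m, b = 17, 2  # the number of LIF layers here is illustrative, not the exact count of a ResNet-18 backbone
assert len(candidate_bcs(m)) == m * (m + 1) // 2
print(len(candidate_bcs(m)), search_space_size(m, b))  # 153 single BCs, 11782 architectures with <= 2 BCs
```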
**Score-based Selection.** As for N-score and E-score, we cannot pre-know the ranges nor the value distributions before computing them over a search space, thus it is difficult to directly combine them. Instead, we calculate these two scores to select BCs separately and then combine their results after removing duplicate BCs. In step one of BCNAS-SNN, we first randomly choose some candidate BCs to make up the initial population pool $P_{init}$. Then we leverage N-score and E-score to select some BCs with high scores from $P_{init}$, respectively, and combine them into the temporary population pool $P_{temp}$ as the initial population for network evaluation in the next step. ### 4.2.3 ACCURACY-BASED EVOLUTIONARY SEARCH In general, training of SNNs can be more time-consuming than that of ANNs due to the additional time dimension. Thus, an efficient search algorithm is crucial for discovering the optimal BCs in SNNs. In this study, we employ the evolutionary search algorithm as it has been shown to be capable of rapidly discovering architectures that are similar to optimal ones. In terms of architecture evaluation, zero-shot methods are known to have the lowest time consumption, but they are highly dependent on network initialization, which may cause inaccurate architecture evaluation. To mitigate this, we adopted the one-shot weight-sharing method, which measures candidate BCs by the validation accuracy of the BCSNN and allows for easy implementation due to the fixed ResNet backbone in our study. The one-shot weight-sharing method in BCNAS-SNN is composed of two consecutive processes: training the super-network that contains all BCs, and searching for the optimal BCs according to the validation accuracy of the BCSNN obtained from the trained super-network. Given the trained super-network, BCNAS-SNN explores the search space by the evolutionary algorithm, throughout which there are two population pools, called the temporary population pool $P_{temp}$ and the top-$k$ population pool $P_{top}$, respectively. As described in subsection 4.2.2, we efficiently obtain the initialized $P_{temp}$ without training through the score-based selection. Then, we compute the validation accuracy of all architectures in $P_{temp}$ based on the trained super-network and choose $k$ architectures with the highest accuracy to make up the $P_{top}$. After these steps, $P_{temp}$ is updated by generating architectures through the evolutionary mechanism, including mutation and crossover. **Mutation** A mutation at the placement of the BC is equivalent to a mutation at the positions of its ports. In order to discover the optimal BCs fast, we reduce the degree of freedom for mutations. We do not allow both ports of a BC to mutate at the same time, and the mutation position of each port is limited. Specifically, when a BC is mutated, only one port mutates to other closed ports with different probabilities, which is inversely proportional to the distance between them, and the position of the mutated port can not change by more than 2, as shown in Figure 4. Figure 5: **Crossover in the evolutionary algorithm.** When the number of backward connections is two, the two backward connections swap their input ports to explore the search space. Table 1: Comparisons with other exiting works on CIFAR10/100 datasets. 
CIFAR10

| Method | Use NAS | Architecture | Timestep | Accuracy (%) | Timestep | Accuracy (%) |
|-----------------|---------|--------------------|-----------|-----------|-----------|-----------|
| STBP-tdBN | ✗ | ResNet-19 | 4 | 93.92 | 6 | 93.16 |
| Dspike | ✗ | ResNet-18 | 4 | 93.66±0.05| 6 | 94.25±0.07|
| TET | ✗ | ResNet-19 | 4 | 94.44±0.08| 6 | 94.50±0.07|
| RecDis-SNN | ✗ | ResNet-19 | 4 | 95.53±0.05| 6 | 95.55±0.05|
| GLIF | ✗ | ResNet-19 | 4 | 94.85±0.07| 6 | 95.03±0.08|
| IM-Loss | ✗ | ResNet-19 | 4 | 95.40±0.08| 6 | 95.49±0.05|
| AutoSNN | ✓ | AutoSNN(C=28) | - | - | 8 | 95.31 |
| SNASNet | ✓ | SNASNet-Bw | 5 | 93.73±0.32| 8 | 94.12±0.25|
| SpikeDHS | ✓ | SpikeDHS-CLA(n×3c5)| - | - | 6 | 95.50±0.03|
| BCNAS-SNN | ✓ | ResNet-18 | 4 | 94.49±0.08| 6 | 94.91±0.09|
| BCNAS-SNN | ✓ | ResNet-19 | 4 | 95.27±0.08| 6 | 95.67±0.04|

CIFAR100

| Method | Use NAS | Architecture | Timestep | Accuracy (%) | Timestep | Accuracy (%) |
|-----------------|---------|--------------------|-----------|-----------|-----------|-----------|
| Dspike | ✗ | ResNet-18 | 4 | 73.35±0.14| 6 | 74.24±0.10|
| TET | ✗ | ResNet-19 | 4 | 74.47±0.15| 6 | 74.72±0.28|
| RecDis-SNN | ✗ | ResNet-19 | 4 | 74.10±0.13| | |
| GLIF | ✗ | ResNet-19 | 4 | 77.05±0.14| 6 | 77.35±0.07|
| IM-Loss | ✗ | VGG-16 | 5 | 70.18±0.09| | |
| AutoSNN | ✓ | AutoSNN(C=64) | - | - | 8 | 69.16 |
| SNASNet | ✓ | SNASNet-Bw | 5 | 73.04±0.36| | |
| SpikeDHS | ✓ | SpikeDHS-CLA(n×3c1)| - | - | 6 | 76.25±0.10|
| BCNAS-SNN | ✓ | ResNet-18 | 4 | 74.85±0.12| 6 | 75.48±0.24|
| BCNAS-SNN | ✓ | ResNet-19 | 4 | 77.12±0.11| 6 | 78.59±0.25|

Tiny-ImageNet

| Method | Use NAS | Architecture | Timestep | Accuracy (%) |
|-----------------|---------|--------------------|-----------|-----------|
| AutoSNN | ✓ | AutoSNN(C=64) | 8 | 46.79 |
| SNASNet | ✓ | SNASNet-Bw | 5 | 54.60±0.48|
| BCNAS-SNN | ✓ | ResNet-18 | 3 | 63.43±0.07|

**Crossover** There are two situations for crossover. When the number of backward connections is only one, crossover turns into addition, which randomly adds a new BC different from the existing one to the previous architecture. Otherwise, the two backward connections swap their input ports, as depicted in Figure 5. For mutation and crossover, the parent architecture is randomly sampled from $P_{top}$ and the generated architectures are put into $P_{temp}$. When the number of architectures generated by mutation and crossover does not reach the size of $P_{temp}$, we use random sampling to fill the remainder. The architectures in $P_{temp}$ are evaluated on the validation set. If the validation accuracy of an evaluated architecture is higher than those in $P_{top}$, $P_{top}$ is updated.

## 5 EXPERIMENTS

To demonstrate the effectiveness of our proposed BCNAS-SNN search framework, we conduct extensive experiments on CIFAR [Krizhevsky et al., 2009], Tiny-ImageNet [Deng et al., 2009] and CIFAR10DVS [Li et al., 2017] with small time steps. Details regarding these datasets are provided in Appendix B. In subsection 5.2, we compare our results with existing state-of-the-art methods, and in subsection 5.3, we carry out an ablation study to evaluate different aspects of our method.

### 5.1 IMPLEMENTATION DETAILS

In the phase of super-network training, each training dataset is split 8:2 into $D_{train}$ and $D_{val}$. The AdamW optimizer [Loshchilov & Hutter, 2018] is adopted for both super-network training and searched-network training, with a learning rate of 0.01 and a weight decay of 0.02. Meanwhile, we use cosine learning rate scheduling [Loshchilov & Hutter, 2016].
We train all super-networks for 100 epochs and train the searched BCSNNs for 300, 300, 300, and 200 epochs on CIFAR10, CIFAR100, CIFAR10DVS, and Tiny-ImageNet, respectively. As for the search algorithm, we set parameters as follows: the maximum number of search iterations $I$ is 20, the size of the initial pool $P_{init}$ is 2000, the size of the top-$k$ pool is 10 (i.e., $k$ equals 10), the size of the temporary pool $P_{temp}$ is 20, the maximum number of architectures generated by mutation or crossover is 10, the mutation probability $\alpha$ is 0.2, and the crossover probability $\beta$ is 0.5.

Table 2: Comparisons on the CIFAR10DVS dataset. AutoSNN uses 20 time steps and the others use 10 time steps.

| Method | Architecture | Accuracy (%) |
|-----------------|--------------|----------|
| STBP-tdBN | ResNet-19 | 67.8 |
| Dspike | ResNet-18 | 75.4±0.05|
| TET | VGGSNN | 83.17±0.15|
| TET | ResNet-18 | 82.4±0.14|
| RecDis-SNN | ResNet-19 | 72.42±0.06|
| IM-Loss | ResNet-19 | 72.60±0.08|
| AutoSNN | AutoSNN(C=16)| 72.50 |
| BCNAS-SNN | ResNet-18 | **82.60±0.13** |

1 Self-implementation results.

Table 3: Comparison between SNNs and BCSNNs. The accuracy of BCSNNs is given in brackets.

| Dataset | Architecture | $T = 4$ | $T = 6$ |
|-----------|--------------|--------|--------|
| CIFAR10 | ResNet-18 | 94.01 (94.58) | 94.51 (95.04) |
| CIFAR10 | ResNet-19 | 94.88 (95.18) | 95.48 (95.73) |
| CIFAR10 | ResNet-18$^1$ | 71.39 (73.33) | 73.86 (74.79) |
| CIFAR100 | ResNet-18 | 73.38 (74.97) | 74.71 (75.72) |
| CIFAR100 | ResNet-19 | 75.24 (77.18) | 76.02 (78.84) |

1 Self-implementation results.

5.2 Comparisons to Existing Works

CIFAR10/100. ResNet-18 [He et al., 2016] and ResNet-19 [Zheng et al., 2021] are adopted for these two datasets. As shown in Table 1, we report the mean and standard deviation of 3 runs under different random seeds. On CIFAR10, our searched BCSNN based on ResNet-18 achieves better results than Dspike, and the best BCSNN based on ResNet-19 achieves the SOTA result when $T = 6$. The BCNAS-SNN search framework demonstrates an even stronger ability on CIFAR100: for both $T = 4$ and $T = 6$, the searched BCSNN based on ResNet-19 obtains SOTA results, with a 1.24% accuracy increment over GLIF, which previously achieved the SOTA result when $T = 6$.

Tiny-ImageNet. Compared to CIFAR100, Tiny-ImageNet is more challenging, with more training images belonging to 200 classes. In previous studies, two methods have been tested on this dataset, both of which use NAS to discover better architectures for SNNs. In our work, the searched BCSNN with a ResNet-18 architecture outperforms the best of them by 8.83% accuracy, illustrating the superiority of the BCNAS-SNN search framework.

CIFAR10DVS. Different from CIFAR10/100 and Tiny-ImageNet, CIFAR10DVS is a neuromorphic dataset converted from CIFAR10 and suffers from much more noise than static datasets, which makes well-trained SNNs prone to overfitting. Compared to the original SNN with TET loss, our searched BCSNN obtains slightly higher accuracy, which illustrates that appropriate backward connections can make better use of the time-domain information in SNNs. Deng et al. [2021] achieve the SOTA result, and we think VGGSNN may be more suitable than ResNet for classification tasks on neuromorphic datasets such as CIFAR10DVS.

5.3 Ablation Study

In this section, we demonstrate the effectiveness of our proposed BCNAS-SNN search framework through an ablation study, as shown in Table 4.
We use ResNet-18 as the network backbone and search for BCs on the CIFAR100 dataset. All the experiments are conducted with 4 and 6 time steps. SNNs vs. BCSNNs. We first compare the performance of searched BCSNNs with that of the primitive SNNs as shown in Table 3. The accuracy of BCSNNs is much better than the primitive SNNs. Especially on CIFAR100 dataset, we report 78.84% top-1 accuracy of ResNet-19 with optimal BCs, which is 2.82% better than the one without BCs. We conclude that the appropriate BCs can improve the SNNs’ performance greatly and the results demonstrate the effectiveness of our search methods. N-Score & E-Score for selective initialization. In step one of BCNAS-SNN, we leverage two metrics to select architectures from $P_{init}$ to make up initial $P_{temp}$. To verify the effectiveness of Table 4: The results of our ablation study. Our method increases the accuracy of the SNN by 1.59% and 1.01% when $T = 4$ and $T = 6$, respectively, by adding appropriate BCs, which achieves the greatest improvement among all the experiments. The baseline refers to the experiment of the SNN without BCs. Exp is short for the experiment. The details of step 1 and step 2 are illustrated in Figure 3. | Experiment | Method in Step1 | Method in Step2 | Search Way in Step2 | Size of $P_{init}$ | $T = 4$ | $T = 6$ | |------------|-----------------|-----------------|----------------------|--------------------|--------|--------| | Baseline | | | | | 73.38 | 74.71 | | BCNAS-SNN | E-score+N-score | One-Shot | Evolutionary | 2000 | 74.97 | 75.72 | | Exp1 | N-score | One-Shot | Evolutionary | 2000 | 74.44 | 75.57 | | Exp2 | E-score | One-Shot | Evolutionary | 2000 | 74.51 | 75.54 | | Exp3 | E-score+N-score | One-Shot | Random | 2000 | 74.40 | 75.30 | | Exp4 | E-score+N-score | N-score | Evolutionary | 2000 | 73.56 | 74.75 | | Exp5 | E-score+N-score | E-score | Evolutionary | 2000 | 73.94 | 74.98 | | Exp6 | E-score+N-score | One-Shot | Evolutionary | 1000 | 74.26 | 75.27 | | Exp7 | E-score+N-score | One-Shot | Evolutionary | 500 | 73.51 | 74.97 | | Exp8 | E-score+N-score | One-Shot | Evolutionary | 0 | 72.58 | 74.72 | the proposed method, we make initial $P_{temp}$ composed of architectures with high scores calculated only by N-score or E-score. From Exp1 and Exp2 in Table 4, we can learn that all of these methods obtain good performance, but the combined method gets a higher accuracy. It demonstrates that the combined zero-shot method is less fortuitous than the single zero-shot method. **Evolutionary Search vs. Random Search.** To verify the effectiveness of the evolutionary algorithm, we keep the initial $P_{temp}$ and use random search instead of the evolutionary search in the step two of BCNAS-SNN. From Exp3 in Table 4, Obviously, it is more efficient to adopt the evolutionary algorithm than random search. Meanwhile, the results of these experiments illustrate our designed evolutionary search algorithm is effective. **One-Shot vs. Zero-Shot.** In this work, considering the time consumption, we adopt zero-shot methods in step one of BCNAS-SNN. Here, the one-shot weight-sharing method and zero-shot methods are discussed within step two of BCNAS-SNN. We test how the results change when adopting zero-shot methods in the evolutionary process. Exp4 and Exp5 in Table 4 show that the one-shot weight-sharing method obtains better results and the accuracy gap between it and zero-shot methods is obvious. 
Through these experiments, we conclude that it is difficult to find an optimal BCSNN only according to zero-shot methods, although they can save time a lot. And the adoption of the one-shot weight-sharing method is necessary. **Effect of Search Budget.** We investigate the effect of the search budget, which refers to the number of evaluated architectures throughout the search process. For simplicity, we fix the maximum number of search iterations $I$ and accomplish some experiments with different sizes of $P_{init}$. From Exp6 to Exp8 in Table 4, we can conclude that the accuracy of the final searched BCSNNs will decrease as the number of samples decreases. In particular, when the size of $P_{init}$ is reduced to 0, i.e., step one of BCNAS-SNN is removed and initial $P_{temp}$ is randomly sampled, the accuracy drops a lot, even lower than the baseline. Thus in our work, keeping enough samples in $P_{init}$ is the key to final network performance. ## 6 Conclusion This paper explores the potential benefits of incorporating BCs in SNNs, which can potentially improve performance by providing extra-temporal information. We find that not all BCs are effective and develop the BCNAS-SNN search framework to identify optimal BCs through a two-step search strategy. Our searched BCSNNs have yielded state-of-the-art results on the CIFAR10/100 and Tiny-ImageNet datasets. Interestingly, we find the searched BCs prefer to be connected to the first two layers, which may motivate future architecture design of SNNs. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method. Currently, considering energy consumption and time cost, we apply the fixed operations and limit the number of BCs, and a larger search space remains to be discussed and discovered in future work. REFERENCES Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight nas. In *International Conference on Learning Representations*, 2020. Filipp Akopyan, Jun Sawada, Andrew Cassidy, Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, Nabil Imam, Yutaka Nakamura, Pallab Datta, Gi-Joon Nam, et al. Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip. *IEEE transactions on computer-aided design of integrated circuits and systems*, 34(10):1537–1557, 2015. Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In *International Conference on Machine Learning*, pp. 322–332. PMLR, 2019. Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. *arXiv preprint arXiv:1611.02167*, 2016. Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. A solution to the learning dilemma for recurrent. Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, and Wolfgang Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. *Advances in neural information processing systems*, 31, 2018. Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once for all: Train one network and specialize it for efficient deployment. In *International Conference on Learning Representations*, 2020. Kaiwei Che, Luziwei Leng, Kaixuan Zhang, Jianguo Zhang, Qinghu Meng, Jie Cheng, Qinghai Guo, and Jianxing Liao. 
Differentiable hierarchical and surrogate gradient search for spiking neural networks. In *Advances in Neural Information Processing Systems*. Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. In *International Conference on Learning Representations*, 2020. Dennis V Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, et al. 2022 roadmap on neuromorphic computing and engineering. *Neuromorphic Computing and Engineering*, 2(2):022501, 2022. Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. *Ieee Micro*, 38(1):82–99, 2018. Vyacheslav Demin and Dmitry Nekhaev. Recurrent spiking neural network learning based on a competitive maximization of neuronal activity. *Frontiers in neuroinformatics*, 12:79, 2018. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Shikuang Deng, Yuhang Li, Shanghang Zhang, and Shi Gu. Temporal efficient training of spiking neural network via gradient re-weighting. In *International Conference on Learning Representations*, 2021. Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. In *International Conference on Learning Representations*, 2019. Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In *International conference on machine learning*, pp. 1675–1685. PMLR, 2019.
RtDok9eS3s
From Fig. 19, the pre-LN block performs worse when setting values and projections to identity. An intuition is provided, but I am not totally convinced by this intuition. In fact, I see the performance difference is almost negligible. It seems OK to set the values and projections to identity in the standard attention block with skip connection.
SIMPLIFYING TRANSFORMER BLOCKS Bobby He & Thomas Hofmann* Department of Computer Science, ETH Zurich ABSTRACT A simple design recipe for deep Transformers is to compose identical building blocks. But standard transformer blocks are far from simple, interweaving attention and MLP sub-blocks with skip connections & normalisation layers in precise arrangements. This complexity leads to brittle architectures, where seemingly minor changes can significantly reduce training speed, or render models untrainable. In this work, we ask if the standard transformer block can be simplified? Combining signal propagation theory and empirical observations, we motivate modifications that allow many block components to be removed with no loss of training speed, including skip connections, projection or value parameters, sequential sub-blocks and normalisation layers. In experiments on both autoregressive decoder-only and BERT encoder-only models, our simplified transformers emulate the per-update convergence speed and performance of standard transformers, while enjoying 16% faster training throughput, & using 15% fewer parameters. 1 INTRODUCTION The transformer architecture (Vaswani et al., 2017) is arguably the workhorse behind many recent successes in deep learning. A simple way to construct a deep transformer architecture is by stacking multiple identical transformer “blocks” one after another in sequence. Each block, however, is more complicated and consists of many different components, which need to be combined in specific arrangements in order to achieve good performance. Surprisingly, the base transformer block has changed very little since its inception, despite attracting the interest of many researchers. In this work, we study whether the standard transformer block can be simplified. More specifically, we probe the necessity of several block components, including skip connections, projection/value matrices, sequential sub-blocks and normalisation layers. For each considered component, we ask if it can be removed without loss of training speed (both in terms of per-update step & runtime), and what architectural modifications need to be made to the transformer block in order to do so. We believe the problem of simplifying transformer blocks without compromising training speed is an interesting research question for several reasons. First, modern neural network (NN) architectures have complex designs with many components, and it is not clear the roles played by these different components in NN training dynamics, nor how they interact with each other. This is particularly pertinent given the existing gap between theory and practice in deep learning, where theorists working to understand the mechanisms of deep learning often only consider simplified architectures due to convenience, not necessarily reflective of modern architectures used in practice. Simplifying the NN architectures used in practice can help towards bridging this divide. On a related theoretical note, our work highlights both strengths and current limitations of signal propagation: a theory that has proven influential due to its ability to motivate practical design choices in deep NN architectures. 
Signal propagation (Poole et al., 2016; Schoenholz et al., 2017; Hayou et al., 2019) studies the evolution of geometric information in an NN at initialisation, captured through inner products of layerwise representations across inputs, and has inspired many impressive results in training deep NNs (Xiao et al., 2018; Brock et al., 2021; Martens et al., 2021; Zaidi et al., 2023). However, the current theory only considers a model at initialisation, and often considers only the initial forward pass. As such, signal propagation at present is unable to shed light on many intricacies of deep NN training dynamics, for example the benefits of skip connections for training speed. Though signal propagation is crucial in motivating our modifications, we would not have arrived at our simplified transformer blocks from theory alone, and relied also on empirical insights. *Correspondence to: bobby.he@inf.ethz.ch. Figure 1: Comparison between different Transformer blocks. (Left) The standard Pre-LN block. (Top Right) Our most simplified block. (Bottom Right) The parallel block [Zhao et al., 2019; Wang & Komatsuzaki, 2021]. Like the parallel block, our block eschews the need for sequential sub-blocks, but we additionally remove all skip connections and normalisation layers, as well as value and projection parameters. Here, $\otimes$ denotes a matrix multiplication, and $\oplus$ denotes a (potentially weighted) sum. Finally, on the practical side, given the cost of training and deploying large transformer models nowadays, any efficiency gains in the training and inference pipelines for the transformer architecture represent significant potential savings. Simplifying the transformer block by removing non-essential components both reduces the parameter count and increases throughput in our models. In particular, we show that it is possible to remove skip connections, value parameters, projection parameters and sequential sub-blocks, all while matching the standard transformer in terms of training speed and downstream task performance. As a result, we reduce parameter count by up to 16% and observe throughput increases of 16% at both train and inference time. Our starting point to simplify Transformer blocks is [He et al., 2023], who show that respecting signal propagation principles allows one to train deep Transformers without skip connections or normalisation layers, but at significantly reduced convergence speeds per parameter update. We first show that regulating the updates to values and projection parameters (Sec. 4.1), or in fact removing them entirely (Sec. 4.2), improves the performance of skipless attention sub-blocks, and recovers the lost per-update training speed reported by [He et al., 2023]. This removes half of the parameters and matrix-multiplications in the attention sub-block. In Sec. 4.3, we show our simplifications combine profitably with parallel sub-blocks [Zhao et al., 2019; Wang & Komatsuzaki, 2021], allowing us to remove all remaining skip connections and sequential sub-blocks without compromising per-update training speed, whilst further boosting the throughput increase to be 16%, in our implementation. Finally, in Sec. 5 we show that our simplified blocks improve when scaled to larger depths, work well in both encoder-only and decoder-only architectures, and that our findings also hold when scaling training length. We conclude with a discussion of limitations and future work in Sec. 6. 
2 RELATED WORK Simplifying deep NNs by removing block components has received a lot of attention, both in transformers and other architectures. In these works, signal propagation theory often acts as inspiration. For a pair of inputs $x, x'$, mapped to a pair of representation/activations vectors $x_l, x'_l \in \mathbb{R}^d$ at layer $l$, signal propagation theory studies the evolution of activation inner products $\frac{1}{\sqrt{d}} x_l^\top x'_l, \frac{1}{\sqrt{d}} x_l^\top x_l, \frac{1}{\sqrt{d}} x'_l^\top x'_l$ at initialisation, which can be tracked with their large $d$ limits [Lee et al., 2018; Matthews et al., 2018; Yang (2019)]. Several pathologies afflicting poorly designed deep NNs can be identified as a result [Schoenholz et al., 2017; Hayou et al., 2019; Yang et al., 2019; Dong et al., 2021; Martens et al., 2021]. For example, the activation norms $\frac{1}{\sqrt{d}} \|x_l\|$ may blow up or vanish, or the cross products $\frac{1}{\sqrt{d}} x_l^\top x'_l$ may converge to a value independent of the inputs $x, x'$ at large $l$, in which case deeper layers of the model are unable to identify different inputs. Avoiding such degeneracies is important to allow for good training dynamics and generalisation in deep NNs [Balduzzi et al., 2017; Xiao et al., 2018, 2020; Hayou et al., 2021; Martens et al., 2021; Noci et al., 2022]. It has been shown that judicious use of weight initialisations and architectural tools, like skip connections and normalisation layers, can improve signal propagation degeneracies and the trainability of deep NNs. Such considerations have motivated principled modifications with simpler architectures. De & Smith (2020) show that an implicit mechanism of Pre-LN skip connections is to downweight the residual branch relative to the skip branch, leading to better signal propagation. They also show that explicitly downweighting the residual branch allows normalisation layers to be removed without affecting performance. The idea of downweighting residuals for improved signal propagation & trainability has been studied extensively in the literature (Zhang et al., 2018; Hanin & Rolnick, 2018; Tarnowski et al., 2019; Zhang et al., 2019; Arpit et al., 2019; Xu et al., 2020; Bachlechner et al., 2021; Touvron et al., 2021; Hayou et al., 2021; Hayou & Yang, 2023; Martens et al., 2021; Davis et al., 2021; Noci et al., 2022; Wang et al., 2022a; Huang et al., 2020; Wang et al., 2022b). For skip connections (He et al., 2016), it has been shown that transforming non-linear activation functions in MLPs and CNNs to be more linear according to a given deep architecture can enable good signal propagation even without skip connections (Martens et al., 2021; Zhang et al., 2022; Li et al., 2022). He et al. (2023) apply similar considerations to the self-attention mechanism, where the key insight is that attention matrices need to be more identity-like in order to prevent signal degradation in skipless transformers. However, these works find that skipless architectures suffer from significant losses in training speed compared to their residual counterparts, when using standard optimisers like SGD or Adam. Such differences were not observed with stronger optimisers like K-FAC (Martens & Grosse, 2015) on CNNs, and this inability to explain training phenomena highlights a current limitation of signal propagation theory. Ding et al. 
(2021, 2023) design a CNN, RepVGG, that can be trained like a residual architecture for fast per-update convergence, but reparameterised to be skipless at test time for significantly higher inference throughput. This reparameterisation is related to our considerations of value and projection parameters in Sec. 4. Many works have considered simplifications or improvements specific to the transformer. Most relevant to our work is the parallel block (Zhao et al., 2019; Wang & Komatsuzaki, 2021) (pictured Fig. 1 bottom right), which computes the MLP and attention sub-blocks in parallel for efficiency gains, with minimal performance loss. Trockman & Kolter (2023) observe that the product of value and projection parameters often has a large identity component in trained transformers, and design an initialisation mimicking this to improve performance in standard transformers on small datasets. We find these matrices can be fixed to the identity without loss of performance, which removes them from our simplified architecture. Other works have considered reducing the frequency of MLP sub-blocks (Sridhar et al., 2022; Pires et al., 2023) or efficient replacements to softmax attention (Katharopoulos et al., 2020; Schlag et al., 2021; Choromanski et al., 2021). Sukhbaatar et al. (2019) remove the MLP by integrating it into the attention sub-block, augmented with persistent memory. 3 PRELIMINARIES A deep transformer architecture of depth $L$ is formed by sequentially stacking $L$ transformer blocks. The most common block is Pre-LN, depicted in Fig. 1 (left), which we treat as a baseline for comparing training speed, both in terms of per-update and runtime. It differs from the original Post-LN block only in the position of the normalisation layers relative to the skip connections, but is more popular as the Post-LN block suffers from poor training stability and signal propagation in deep layers (Xiong et al., 2020; Liu et al., 2020; Noci et al., 2022; He et al., 2023). Transformer blocks take representations of sequences as inputs. For an input sequence representation $X_{in} \in \mathbb{R}^{T \times d}$, with $T$ tokens and dimension $d$, the Pre-LN block outputs $X_{out}$, where: $$X_{out} = \alpha_{FF} \hat{X} + \beta_{FF} \text{MLP(Norm}(\hat{X})),$$ where $\hat{X} = \alpha_{SA} X_{in} + \beta_{SA} \text{MHA(Norm}(X_{in}))$. (1) with scalar gain weights $\alpha_{FF}, \beta_{FF}, \alpha_{SA}, \beta_{SA}$ fixed to 1 by default. Here, “MHA” stands for Multi-Head Attention (detailed below), and “Norm” denotes a normalisation layer (Ba et al., 2016; Zhang & Serre, 2019). In words, we see that the Pre-LN transformer block consists of two sequential sub-blocks (one attention and one MLP), with normalisation layers and residual connections for both sub-blocks, and crucially the normalisation layers are placed within the residual branch. The MLP is usually single hidden-layer, with hidden dimension that is some multiple of $d$ (e.g. 4 (Vaswani et al., 2017) or 8/3 (Touvron et al., 2023)), and acts on each token in the sequence independently. The MHA sub-block allows tokens to share information between one another using self-attention. For input sequence $X$, the self-attention mechanism outputs: \[ \text{Attn}(X) = A(X)XW^V, \quad \text{where } A(X) = \text{Softmax} \left( \frac{1}{\sqrt{d_k}} XW^QW^K^\top X^\top + M \right), \] where \(W^Q, W^K \in \mathbb{R}^{d \times d_k}\) and \(W^V \in \mathbb{R}^{d \times d_v}\) are trainable query, key and value parameters respectively. 
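For concreteness, the sketch below implements Eq. (1) with a single-head version of the attention map of Eq. (2); it omits masking and the multi-head concatenation, both of which are described next, and its dimensions and module names are illustrative choices rather than a reference implementation.

```python
import torch
import torch.nn as nn


class PreLNBlock(nn.Module):
    """Eq. (1): X_out = alpha_FF * X_hat + beta_FF * MLP(Norm(X_hat)),
       with    X_hat = alpha_SA * X_in + beta_SA * Attn(Norm(X_in))."""

    def __init__(self, d, d_k, d_ff):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.w_q = nn.Linear(d, d_k, bias=False)
        self.w_k = nn.Linear(d, d_k, bias=False)
        self.w_v = nn.Linear(d, d, bias=False)
        self.mlp = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))
        self.d_k = d_k
        # gain weights, fixed to 1 by default
        self.alpha_sa = self.beta_sa = self.alpha_ff = self.beta_ff = 1.0

    def attn(self, x):
        """Single head, no mask, cf. Eq. (2): Attn(X) = softmax(Q K^T / sqrt(d_k)) X W^V."""
        a = torch.softmax(self.w_q(x) @ self.w_k(x).transpose(-1, -2) / self.d_k**0.5, dim=-1)
        return a @ self.w_v(x)

    def forward(self, x):
        x_hat = self.alpha_sa * x + self.beta_sa * self.attn(self.norm1(x))
        return self.alpha_ff * x_hat + self.beta_ff * self.mlp(self.norm2(x_hat))


block = PreLNBlock(d=768, d_k=64, d_ff=3072)
out = block(torch.randn(2, 128, 768))  # input of shape [batch, T, d]
```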
Here, the attention matrix \(A(X) \in \mathbb{R}^{T \times T}\) can be thought of as allowing different tokens to “mix” with each other. \(M \in \mathbb{R}^{T \times T}\) is a mask taking values in \(\{0, -\infty\}\) that depend on the modelling task. For causal auto-regressive transformers like GPT, \(M_{i,j} = 0\) iff \(i > j\), which prevents a token from obtaining information from future tokens. In bidirectional models like BERT, masking is typically applied at the token level and not in the attention mechanism (i.e. \(M\) is the zero matrix). The Multi-Head Attention name arises because it is typical in practice to apply self-attention on \(H\) different “heads” (with independent parameters) with \(d_v = d_k = \frac{d}{H}\), as follows: \[ \text{MHA}(X) = \text{Concat}(\text{Attn}_1(X), \ldots, \text{Attn}_H(X))W^P, \] where \(W^P \in \mathbb{R}^{d \times d}\) denotes a trainable square projection matrix that combines different attention heads. If we let \(W^V_n\) denote the value parameters for head \(n\), then the concatenated value weights \(W^V = \text{Concat}(W^V_1, \ldots, W^V_H) \in \mathbb{R}^{d \times d}\) can be viewed as a square matrix. One of our key findings, in Sec.4.2, is to show that fixing the value and projection parameters, \(W^V\) and \(W^P\), to the identity matrix significantly improves per-update training speed in skipless transformer blocks (to speeds matching or even outperforming the standard Pre-LN block), whilst simultaneously significantly reducing the parameter count and matrix-multiplication FLOPs required, thus increasing throughput. ### 4 Simplifying Transformer Blocks We now describe how we arrive at our simplest Transformer block, Fig.1 (top right), starting from the Pre-LN block, using a combination of signal propagation theory and empirical observations. Each subsection here will remove one block component at a time without compromising training speed, and we aim to provide an intuitive account of our progress in simplifying the Pre-LN block. All experiments in this section use an 18-block 768-width causal decoder-only GPT-like model on the CodeParrot dataset,\(^1\) which is sufficiently large that we are in a single epoch regime with minimal generalisation gap (Fig.2), allowing us to focus on training speed. We provide depth scaling, and non-causal encoder-only, experiments, in Sec.5.\(^2\) We use a linear decay learning rate (LR) schedule\(^3\) with AdamW (\(Loshchilov & Hutter, 2017\)), with linear warmup for the first 5% steps. The maximum LR is tuned on training loss, using a logarithmic grid. Additional experimental details are in App.D. #### 4.1 Removing the Attention Sub-block Skip Connection We first consider a skipless attention sub-block, whose output has the simple interpretation of adding, to each token, other token representations according to the attention matrix. In the notation of Eq. (1), this corresponds to \(\alpha_{SA} = 0\). Naively removing the attention skip leads to a signal degeneracy called rank collapse (\(Dong et al., 2021\)), which harms trainability (\(Noci et al., 2022\)). **Setup** (\(He et al., 2023\)) outline modifications needed to the self-attention mechanism in order to correct these signal degeneracies at large depths, and train such deep skipless networks for the first time. 
One method they introduce, Value-SkipInit, modifies the self-attention matrix to compute: \[ A(X) \leftarrow (\alpha I_T + \beta A(X)) \] with trainable scalars \(\alpha, \beta\) initialised to 1 and 0 respectively, and \(I_T \in \mathbb{R}^{T \times T}\) is the identity matrix. The key insight here is to initialise the self-attention matrix to have a dominant identity component that encourages a token to attend to itself more relative to other tokens, much in the same way that a Pre-LN skip upweights the skip branch relative to the residual branch for good signal propagation at large depths (\(De & Smith, 2020\)). We point out that these considerations only apply at initialisation. \(Noci et al., 2023\) propose an extension, Shaped Attention, also motivated by signal propagation: \[ A(X) \leftarrow (\alpha I_T + \beta A(X) - \gamma C). \] --- \(^1\)Our setting is taken from [https://huggingface.co/learn/nlp-course/chapter7/6](https://huggingface.co/learn/nlp-course/chapter7/6) \(^2\)We found linear decay to slightly outperform cosine decay for both our models and baselines (c.f. Fig.12). Here, \( \alpha, \beta, \gamma \) are trainable, and \( C \) is a constant (not trained) centering matrix, set to be equal to the values of \( A \) when the query-key dot product \( \frac{1}{\sqrt{d_k}} X W^Q W^K^\top X^\top \) is zero.\(^3\) Like He et al. (2023), we initialise queries \( W^Q = 0 \), which exactly zeros the query-key dot product at initialisation. Then, \( \beta = \gamma \) means that \( \beta A(X) - \gamma C = 0 \) at initialisation, and \( \alpha = 1 \) ensures a dominant identity component, and good signal propagation. All et al. (2023) also centre attention and show it helps prevent oversmoothing in vision transformers and graph NNs. We found Shaped Attention, Eq. (5), to slightly outperform Eq. (4) (c.f. Fig. 13), and use it in our experiments on skipless attention sub-blocks, with \( \beta = \gamma = \alpha = 1 \) at initialisation unless stated otherwise. We also use head-dependent scalars in Eq. (5), \( \alpha_h, \beta_h \) and \( \gamma_h \), which provided a small additional performance boost. One final important implementation detail is that for any skipless block we explicitly downweight the MLP branch by initialising trainable \( \beta_{FF} = O(\frac{1}{\sqrt{L}}) < 1 = \alpha_{FF} \). This is motivated through signal propagation theory (c.f. Stable ResNet, Hayou et al. (2021)), and accounts for the fact that removing skip connections (in either MLP or MHA sub-block) reduces the implicit downweighting (2020). For the depth \( L = 18 \) networks in this section, we initialise \( \beta_{FF} = 0.1 \). **Recovering lost training speed** Despite allowing skipless transformers to train for the first time, He et al. (2023) reported a significant loss of training speed per step compared to the Pre-LN block. We verify this in Fig. 2. To recover the lost training speed without attention skips, note that identity attention matrices make a deep transformer with no MLP sub-blocks act like a deep skipless linear NN at initialisation:\(^4\) \( f(X) = X \prod_{l=1}^{L} (W^V_l W^P_l) \), where \( W^V, W^P \) are the value and projection weights in layer \( l \). In He et al. (2023), they initialise \( W^V, W^P \) to be independent random orthogonal matrices to avoid signal degeneracies from Gaussian initialisations (Saxe et al., 2013; Hu et al., 2020; Meterez et al., 2023). It is known that such deep skipless networks train slower than their residual counterparts (Martens et al., 2021). 
Moreover, it is also known that Pre-LN skips downweight residual branches (De & Smith, 2020), which is equivalent to reduced learning rates & downscaled parameter updates from initialisation in linear layers (e.g. Ding et al., 2023); we outline and empirically verify this duality in App. A. This motivates us to study a reparameterisation of the value/projection weights \( W^V, W^P \):
\[ W^V = \alpha_V W^V_{\text{init}} + \beta_V \Delta W^V, \quad \text{and} \quad W^P = \alpha_P W^P_{\text{init}} + \beta_P \Delta W^P, \]
with “skip” \( W^V_{\text{init}} \) fixed to be random orthogonal to preserve the signal propagation achieved at initialisation, and “residual” \( \Delta W^V \) trainable and initialised to zero. We consider downweighting the residuals with fixed \( \beta_V \leq \alpha_V = 1 \), which biases the matrices \( W^V, W^P \) to stay closer to initialisation, and would expect \( \beta_V = O(\frac{1}{\sqrt{L}}) \) to recover the benefits of skip connections (Hayou et al., 2021).\(^5\) Similar considerations apply for \( W^P_{\text{init}}, \Delta W^P, \alpha_P, \beta_P \). In Fig. 3, we find as expected that using smaller \( \beta_V \) and \( \beta_P \) with this reparameterisation, Eq. (6), already restores much of the training speed loss in skipless attention sub-blocks, using orthogonally initialised \( W^V_{\text{init}}, W^P_{\text{init}} \). To close this gap further, we note that from a signal propagation perspective, initialising \( W^V_{\text{init}}, W^P_{\text{init}} \) to be the identity matrix is equivalent to orthogonal initialisation when the attention sub-block is skipless. With identity initialisation $W_{\text{init}}^V = W_{\text{init}}^P = I_d$ we see a consistent improvement over orthogonal initialisation, which essentially matches the Pre-LN block. One thing we can conclude from this experiment is that restricting the updates to the values and projections from their initialisation replicates the effect of the attention sub-block skip connection, and recovers the lost per-update training speed. We investigate the performance difference between identity and random orthogonal initialisation in Fig. 3 further in the appendix (Fig. 15).

---
\(^3\)For example, when there is no masking, \( C \) becomes the uniform \( T \times T \) stochastic matrix \( \frac{1}{T} \mathbf{1}\mathbf{1}^\top \).
\(^4\)We set aside the MLP sub-block here for simplicity, but point out that all of our experiments use MLPs so our findings carry over to the full setting.
\(^5\)Although the initial forward pass is identical regardless of \( \beta_V \), due to zero-initialised \( \Delta W^V \).

4.2 Removing Value and Projection Parameters

In fact, we can also conclude from Fig. 3 that it is possible to completely remove the value and projection parameters $W^V, W^P$ with minimal loss of per-update training speed. Namely, when $\beta_V = \beta_P = 0$ and identity-initialised $W_{\text{init}}^V = W_{\text{init}}^P = I$, we essentially match the Pre-LN block performance after equal numbers of training steps. In this case, we have $W^V = W^P = I$ throughout training, i.e. the values and projection parameters are identity. To further verify this surprising observation, we consider reparameterised $W^V, W^P$, as in Eq. (6) with identity $W_{\text{init}}^V, W_{\text{init}}^P$, but now trainable scalars $\alpha_V, \beta_V, \alpha_P, \beta_P$.
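To make the reparameterisation in Eq. (6) concrete, here is a minimal sketch (our own illustration, not the authors' code) of a value or projection matrix with a fixed "skip" component and a zero-initialised trainable "residual"; the scalars α and β are fixed floats here, whereas the experiment described next makes them trainable.

```python
import torch
import torch.nn as nn


class ReparamLinear(nn.Module):
    """W = alpha * W_init + beta * dW (Eq. 6): fixed 'skip' weights plus a trainable 'residual'."""

    def __init__(self, d, alpha=1.0, beta=0.1, identity_init=True):
        super().__init__()
        w0 = torch.eye(d) if identity_init else torch.linalg.qr(torch.randn(d, d))[0]
        self.register_buffer("w_init", w0)          # fixed at its initial (identity or orthogonal) value
        self.dw = nn.Parameter(torch.zeros(d, d))   # trainable, zero-initialised
        self.alpha, self.beta = alpha, beta

    def weight(self):
        return self.alpha * self.w_init + self.beta * self.dw

    def forward(self, x):  # x: [batch, T, d]
        return x @ self.weight()


# beta = 0 with identity init removes the value/projection matrix entirely (Sec. 4.2).
w_v = ReparamLinear(d=768, beta=0.0)
x = torch.randn(2, 16, 768)
assert torch.allclose(w_v(x), x)
```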
From an initialisation of $\alpha_V = \alpha_P = 1$ and $\beta_V = \beta_P = 0.2$, we plot the evolution of “residual-skip” ratios $\frac{\beta_V}{\alpha_V}, \frac{\beta_P}{\alpha_P}$ in Fig. 4. Weight decay was not applied on $\alpha_V, \beta_V, \alpha_P, \beta_P$. We see that the residual-skip weight ratios $\frac{\beta_V}{\alpha_V}, \frac{\beta_P}{\alpha_P}$ converge to 0 for the vast majority of layers, which indicates that these reparameterised matrices $W^V, W^P$ converge to the identity during training. As a result, the extra capacity to perform linear projections via $W^V, W^P$ is not used. We plot the corresponding trajectories for other scalar parameters like $\beta_{\text{PF}}$, in Figs. 17 to 20 which do not tend to 0. The model in Fig. 4 with trainable $W^V, W^P$ achieved worse final evaluation loss than the model in Fig. 3 with identity $W^V, W^P$ (1.194 vs. 1.178). Interestingly, this trend is reversed if the attention skip is re-added (Fig. 23). We thus elect to remove values and projection parameters $W^V, W^P$ in our skipless attention sub-blocks, by setting them to the identity. We refer to the resulting sub-block as the Simplified Attention Sub-block (SAS). Our full SAS block is depicted in Fig. 10 and we detail the mathematical computation in Eq. (12). We note that SAS blocks use only half of the parameters as well as half the matrix-multiplications in the attention sub-block: only query and key parameters remain. This results in a 13% reduction in the total number of parameters (146M vs 167M for 18 blocks) in the models we consider in this section. In Fig. 5 we see that when comparing speed in terms of wall-clock runtime on an A5000 GPU, our SAS block already trains at speeds (slightly) outperforming the default Pre-LN transformer. The corresponding plot comparing speed in terms of training steps taken is provided in Fig. 26. A more detailed analysis of efficiency gains in our simplified blocks can be found in Sec. 5. Though we do not have a rigorous proof for why the training dynamics in skipless transformers forgo additional capacity by converging to identity value and projection parameters (Fig. 4), nor why fixing such matrices to the identity results in no performance degradation and in fact trains faster than having trainable values and projections (Fig. 3), we offer some half-explanations. First, the fact that $W^V, W^P$ are simply linear projections of the input sequence representations $X$ (as opposed to in the MLP sub-block where elementwise non-linearities are places between such matrices), could --- 6The only exception is the first layer’s value parameters $W_1^V$, which is the only ratio above 0.05 in Fig. 4. We saw very minor performance gains by keeping $W_l^V$ (c.f. Fig. 24), so keep it whilst removing all other $W_l^V, W_l^P$ for $l \leq L$. In general, the % of all parameters removed by $W^V, W^P$ depends on the ratio of width $d$ to the MLP hidden width $d_{\text{FF}}$, vocabulary size and depth. Here, we have width $d = 768$, MLP hidden width $d_{\text{FF}} = 3072 = 4d$, vocabulary size $50K$ and depth $L = 18$. In the large depth limit, only the ratio $d_{\text{FF}}/d$ matters. mean that the additional capacity afforded by such matrices is not particularly substantial. This is corroborated by Trockman & Kolter (2023) who found in trained transformers the product \( W^V W^P \) often has a dominant identity component. 
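Putting the pieces together, the following is a self-contained sketch of what we take the SAS attention sub-block (Eq. (12), Fig. 10) to be: only query and key weights are learned, the attention matrix is shaped as in Eq. (5), and the values and output projection are fixed to the identity. Class and variable names are ours, and we show a single head without masking for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SASAttention(nn.Module):
    """Simplified Attention Sub-block (sketch): shaped attention with identity values/projections."""

    def __init__(self, d_model: int, d_k: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_k, bias=False)
        self.w_k = nn.Linear(d_model, d_k, bias=False)
        nn.init.zeros_(self.w_q.weight)            # W^Q = 0: query-key logits are zero at initialisation
        self.alpha = nn.Parameter(torch.ones(()))  # shaping scalars; beta = gamma cancels the attention term at init
        self.beta = nn.Parameter(torch.ones(()))
        self.gamma = nn.Parameter(torch.ones(()))
        self.d_k = d_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, T, d_model)
        T = x.size(1)
        logits = self.w_q(x) @ self.w_k(x).transpose(-2, -1) / self.d_k ** 0.5
        attn = F.softmax(logits, dim=-1)
        centre = torch.full((T, T), 1.0 / T, device=x.device, dtype=x.dtype)  # unmasked centring matrix
        identity = torch.eye(T, device=x.device, dtype=x.dtype)
        shaped = self.alpha * identity + self.beta * attn - self.gamma * centre
        return shaped @ x   # identity W^V and W^P: the sub-block only mixes token representations
```

Note that with these initialisations the module reduces to the identity map on the token sequence at initialisation, which is the signal-propagation property the text relies on.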
Also, from a signal propagation perspective, there is no reason why initialising such matrices to be non-identity (e.g., orthogonal or Gaussian) would be preferred to identity initialisation, nor is it clear why they would be necessary in the first place, especially given the additional matrix-multiplication FLOPs they require. ### 4.3 Removing the MLP Sub-block Skip Connection So far we have simplified the Pre-LN transformer block by removing, without loss of training speed, three key components: 1) the attention sub-block skip connection, as well as 2) value and 3) projection matrices. We next turn to removing the remaining skip connection in the MLP sub-block. This proved more challenging. Like previous works (Martens et al., 2021; Zhang et al., 2022; He et al., 2023), we found that making activations more linear, motivated through signal propagation, still resulted in a significant loss of per-update training speed without MLP skips when using Adam, as shown in Fig. 25. We also experimented with variants of the Looks Linear initialisation (Balduzzi et al., 2017), with Gaussian, orthogonal or identity weights, to no avail. As such, we use standard activations (e.g., ReLU in this section) and initialisations in the MLP sub-block throughout our work. Instead, we turn to the idea of parallel MHA and MLP sub-blocks (Zhao et al., 2019; Wang & Komatsuzaki, 2021), which has proven popular in several recent large transformer models, such as PALM (Chowdhery et al., 2022) and ViT-22B (Dehghani et al., 2023). The parallel transformer block is depicted in Fig. 1 (bottom right), and mathematically, given input \( X_{in} \) it outputs \( X_{out} \), where: \[ X_{out} = \alpha_{comb} X_{in} + \beta_{FF} \text{MLP(Norm}(X_{in})) + \beta_{SA} \text{MHA(Norm}(X_{in})), \] (7) with skip gain \( \alpha_{comb} = 1 \), and residual gains \( \beta_{FF} = \beta_{SA} = 1 \) as default. In the parallel block, the MLP and MHA sub-blocks each take the same (normalised) input, affording more parallelisation compared to the standard Pre-LN block, which computes sub-blocks sequentially. The two sub-blocks are combined by summing their outputs, in conjunction with a single skip connection, with weight \( \alpha_{comb} \). This parallelisation, as well as the removal of one skip connection and one normalisation layer enables efficiency gains: Chowdhery et al. (2022) report the parallel block has 15% faster training speed compared to the standard “sequential” Pre-LN block. It is straightforward to combine our simplifications from Secs. 4.1 and 4.2 with the parallel block in Eq. (7): we simply 1) use our SAS attention sub-block, Eq. (12), 2) set fixed \( \alpha_{comb} = 0 \) to remove all skip connections in the block, and 3) downweight \( \beta_{FF} < 1 \). The resulting block is pictured in Fig. 11 and we refer to it as SAS-Parallel (SAS-P for short). We see in Fig. 5 that SAS-P trains even faster in terms of runtime compared to the SAS and Pre-LN blocks, and matches the training speed of the parallel block despite using 13% fewer parameters. Our intuition is that the combination of Shaped Attention and identity values/projections preserves signal between blocks throughout training and replaces the need for a skip connection in either sub-block. Moreover, we note that our attention sub-block is the identity function, \( X_{out} = X_{in} \), at initialisation, so there is no difference between our sequential SAS (Fig. 10) and parallel SAS-P (Fig. 11) blocks at initialisation. 
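The parallel combination in Eq. (7) and its SAS-P specialisation can be sketched as follows. The class is ours and takes arbitrary attention and MLP modules, so the same code covers the standard parallel block (alpha_comb = 1) and SAS-P (alpha_comb = 0, the SAS attention sub-block, and a downweighted MLP branch).

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Parallel transformer block, Eq. (7):
    X_out = alpha_comb * X + beta_ff * MLP(Norm(X)) + beta_sa * MHA(Norm(X))."""

    def __init__(self, d_model: int, mha: nn.Module, mlp: nn.Module,
                 alpha_comb: float = 1.0, beta_ff: float = 1.0, beta_sa: float = 1.0):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mha, self.mlp = mha, mlp
        self.alpha_comb, self.beta_ff, self.beta_sa = alpha_comb, beta_ff, beta_sa

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)   # both sub-blocks see the same normalised input
        return self.alpha_comb * x + self.beta_ff * self.mlp(h) + self.beta_sa * self.mha(h)

# SAS-P configuration (sketch): remove the remaining skip and downweight the MLP branch,
# reusing the SASAttention module sketched earlier for the attention sub-block.
# block = ParallelBlock(768, mha=SASAttention(768, 64),
#                       mlp=nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768)),
#                       alpha_comb=0.0, beta_ff=0.1)
```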
### 4.4 Removing Normalisation Layers The final simplification we consider is removing normalisation layers, leaving us with our simplest block (Fig. 1, top right). From a signal propagation initialisation perspective, normalisation has been expendable at all stages of our simplifications in this section: the idea is that normalisation in Pre-LN blocks implicitly downweights residual branches, and this beneficial effect can be replicated without normalisation by another mechanism: either explicitly downweighting residual branches when skips are used, or biasing attention matrices to the identity/transforming MLP non-linearities to be “more” linear otherwise. As we account for these mechanisms in our modifications (downweighted MLP \( \beta_{FF} \) & Shaped Attention), from an initialisation perspective there is no need for normalisation. Of course, these modifications have effects on training speeds and stability beyond initialisation, which are harder to predict from existing theory alone. In Fig. 5 we see that removing normalisa- --- 8E.g., in single-head attention one can reparameterise \( W^V, W^P \) into one matrix with no expressivity loss. tion allows even our simplest transformer block, which does not have skips, sequential sub-blocks, values, projections or normalisations, to match the training speed of the Pre-LN block in terms of runtime. Having said that, we do observe a slight degradation in training speed per iteration, as seen in Fig. 6, suggesting that normalisation layers have some beneficial properties for training speed beyond what is captured by signal propagation theory. We thus treat our SAS (Fig. 10) and SAS-P (Fig. 11) blocks, with normalisation, as our main approaches. On this note, we point out that Dehghani et al. (2023) found extra normalisation on the queries and keys to provide improved training stability in ViT-22B, going against the recent trend of researchers seeking to remove normalisation. 5 FURTHER EXPERIMENTAL ANALYSIS Having introduced all of our simplifications in Sec. 4, we now provide further empirical analysis of our simplified blocks across a range of settings, as well as details of the efficiency gains afforded by our simplifications. In interest of space, additional experimental details can be found in App. D. Depth Scaling Given that signal propagation theory often focuses on large depths, where signal degeneracies usually appear, it is natural to ask whether the improved training speeds of our simplified transformer blocks also extend to larger depths. In Fig. 6, we see that scaling depth from 18 to 72 blocks leads to an increase in performance in our models as well as the Pre-LN transformer, indicating that our simplified models are able to not only train faster but also to utilise the extra capacity that more depth provides. Indeed, the per-update trajectories of our simplified blocks and Pre-LN are near-indistinguishable across depths, when using normalisation. On the other hand, we see that Value-SkipInit (He et al., 2023) actually trains slower per update at depth 72 compared to 18 despite the increase in capacity and parameter count. Moreover, the gap in performance between Value-SkipInit and the other models increases with larger depth, which implies poor scalability of the previous method. We note that 72 blocks is already reasonably deep by publically-available modern standards (Hoffmann et al., 2022; Touvron et al., 2023). 
BERT Next, we demonstrate our simplified blocks’ performance extends to different datasets and architectures besides autoregressive decoder-only, as well as on downstream tasks. We choose the popular setting of the bidirectional encoder-only BERT model (Devlin et al., 2018) for masked language modelling, with downstream GLUE benchmark. In particular, we adopt the “Crammed” BERT setup of Geiping & Goldstein (2023), which asks how well one can train a BERT model with a modest training budget: 24 hours on a single consumer GPU. The authors provide an architecture, data pipeline and training setup that has been optimised for this low resource setting. We note that the Crammed architecture uses the Pre-LN block, and describe other setup details in App. D. We plug-in our simplified blocks, keeping the existing optimised hyperparameters, besides tuning learning rate and weight decay. In Fig. 7, we see that our simplified blocks (especially with normalisation) match the pre-training speed on the masked language modelling task compared to the (Crammed) Pre-LN baseline within the 24 hour runtime. On the other hand, the removal of skip connections without modifying the values and projections (as in He et al. (2023)) once again leads to a significant loss of training speed. In Fig. 27, we provide the equivalent plot in terms of microbatch steps. Moreover in Table 1, we find that our methods match the performance of the Crammed BERT baseline after finetuning on the GLUE benchmark. We provide a breakdown over the downstream tasks in Table 2. We use the same finetuning protocol as Geiping & Goldstein (2023) (5 epochs, constant hyperparameters across tasks, dropout regularisation) for a fair comparison. Interestingly, Value-SkipInit is largely able to recover from its poor pre-training in the fine-tuning phase. This, combined with the need for dropout when fine-tuning, suggests that factors besides pre-training speed are also important for fine-tuning. As the focus of our work primarily concerns training speed from random initialisations, we leave this to future work. Relatedly, we found removing normalisations (Sec. 4.4) to cause instabilities when fine-tuning, where a small minority of sequences in some downstream datasets had NaN values in the initial forward pass from the pre-trained checkpoint. **Efficiency Gains** In Table 1, we also detail the parameter count and training speeds of models using different Transformers blocks on the masked language modelling task. We compute the speed as the ratio of the number of microbatch steps taken within the 24 hours of pre-training, relative to the baseline Pre-LN Crammed BERT. We see that our models use 16% fewer parameters, and SAS-P & SAS are 16% & 9% faster per iteration, respectively, compared to the Pre-LN block in our setting. We note that in our implementation the Parallel block is only 5% faster than the Pre-LN block, whereas Chowdhery et al. (2022) observed 15% faster training speeds, suggesting that further throughout increases may be possible with a more optimised implementation. Our implementation, like Geiping & Goldstein (2023), uses automated operator fusion in PyTorch (Sarofeen et al., 2022). **Longer training** Finally, given the current trends of training smaller models for longer on more data (Hoffmann et al., 2022; Tovvron et al., 2023), we investigate if our simplified blocks continue to match the training speeds of the Pre-LN block with longer training. To do this, we take our models from Fig. 
5 on CodeParrot and train with $3 \times$ tokens. To be precise, we train for around 120K (rather than 40K) steps with batch size 128 and sequence length 128, which results in around 2B tokens. In Fig. 8, we do indeed see that our simplified SAS and SAS-P blocks continue to match or outperform the Pre-LN block in training speed when trained on more tokens. ### Table 1: GLUE benchmark & efficiency gains. | Block | GLUE | Params | Speed | |------------------------|------|--------|-------| | Pre-LN (Crammed) | 78.9±.7 | 120M | 1 | | Parallel | 78.5±.6 | 120M | 1.05 | | V-SkipInit | 78.0±.3 | 120M | 0.95 | | SAS (Sec. 4.2) | 78.4±.8 | 101M | 1.09 | | SAS-P (Sec. 4.3) | 78.3±.4 | 101M | 1.16 | | SAS-P, no norm | - | 101M | 1.20 | Figure 8: Training speeds continue to hold with longer training. ### 6 DISCUSSION **Limitations and future work** While we have demonstrated the efficacy of our simplifications across architectures, datasets, and tasks, the models we have considered (100-300M parameters) are small relative to the largest transformers. It would be interesting to investigate the performance of our simplified blocks at larger scales, especially because Chowdhery et al. (2022) report parallel blocks improve relative to Pre-LN blocks with scale. Our depth scaling experiments already show promise in this regard. On the theoretical side, though we were able to match the training speed of Pre-LN blocks with normalisation removed (Fig. 5), there are still unanswered questions regarding the benefits of normalisation for training speed and stability, and we were unable to remove normalisation with good downstream task performance. Moreover, while we tuned key hyperparameters like learning rate, it is possible that many default hyperparameters and choices we inherited, e.g. the AdamW optimiser, or fine-tuning protocol, are overfit to the default Pre-LN block, and an exhaustive hyperparameter search for our simplified blocks would yield further improvements. Finally, on the practical side, we believe that a more hardware-specific implementation of our simplified blocks could give further improvements to training speed and performance. **Conclusion** In this work, we asked whether it is possible to simplify the standard Transformer block by removing unnecessary components. Combining signal propagation theory and empirical insights, we have shown that it is possible to remove skip connections, sequential sub-blocks, value and projection parameters, without loss of training speed or downstream task performance. As a result, our models have around 15% fewer parameters and 16% increased throughput. We believe our work can lead to simpler architectures being used in practice, thereby helping to bridge the gap between theory and practice in deep learning, and reducing the cost of large transformer models. REPRODUCIBILITY STATEMENT Our code for experiments on auto-regressive transformers can be found at https://github.com/bobby-he/simplified_transformers. ACKNOWLEDGMENTS We would like to thank Sotiris Anagnostidis, Andrei Ivanov & Lorenzo Noci for helpful discussions in the initial stages of this project, and James Martens, John Martinis, Keivan Mohtashami, Tiago Pimentel & Imanol Schlag, as well as the anonymous reviewers, for constructive feedback on an early version of this manuscript. REFERENCES Ameen Ali, Tomer Galanti, and Lior Wolf. Centered self-attention layers. arXiv preprint arXiv:2306.01610, 2023. Devansh Arpit, Victor Campos, and Yoshua Bengio. How to initialize your network? 
robust initialization for weightnorm & resnets. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. Rezero is all you need: Fast convergence at large depth. In Uncertainty in Artificial Intelligence, pp. 1352–1361. PMLR, 2021. David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning, pp. 342–350. PMLR, 2017. Andy Brock, Soham De, Samuel L Smith, and Karen Simonyan. High-performance large-scale image recognition without normalization. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 1059–1071. PMLR, 18–24 Jul 2021. Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, and Adrian Weller. Rethinking attention with performers. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=Ua6zuk0WRH. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pp. 933–941. PMLR, 2017. Jared Q Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Re, Chelsea Finn, and Percy Liang. Catformer: Designing stable transformers via sensitivity analysis. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 2489–2499. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/davis21a.html. Soham De and Sam Smith. Batch normalization biases residual blocks towards the identity function in deep networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19964–19975. Curran Associates, Inc., 2020.
pw2ssoOTpo
Can the authors give more analysis and justification for why they divide the different domains based on color and cartoon/non-cartoon? There are many other ways to categorize domains, such as styles other than cartoon.

CIFAR-10-Warehouse: Broad and More Realistic Testbeds in Model Generalization Analysis Xiaoxiao Sun\textsuperscript{1*}, Xingjian Leng\textsuperscript{1*}, Zijian Wang\textsuperscript{2*}, Yang Yang\textsuperscript{1*}, Zi Huang\textsuperscript{2}, Liang Zheng\textsuperscript{1} \textsuperscript{1} The Australian National University \textsuperscript{2} The University of Queensland \{first-name.last-name\}@anu.edu.au\textsuperscript{1} zijian.wang@uq.edu.au, huang@itee.uq.edu.au Abstract Analyzing model performance in various unseen environments is a critical research problem in the machine learning community. To study this problem, it is important to construct a testbed with out-of-distribution test sets that have broad coverage of environmental discrepancies. However, existing testbeds typically either have a small number of domains or are synthesized by image corruptions, hindering algorithm design that demonstrates real-world effectiveness. In this paper, we introduce CIFAR-10-Warehouse, consisting of 180 datasets collected by prompting image search engines and diffusion models in various ways. Generally sized between 300 and 8,000 images, the datasets contain natural images, cartoons, certain colors, or objects that do not naturally appear. With CIFAR-10-W, we aim to enhance the evaluation and deepen the understanding of two generalization tasks: domain generalization and model accuracy prediction in various out-of-distribution environments. We conduct extensive benchmarking and comparison experiments and show that CIFAR-10-W offers new and interesting insights inherent to these tasks. We also discuss other fields that would benefit from CIFAR-10-W. Data and code are available at \url{https://sites.google.com/view/CIFAR-10-warehouse/} 1 Introduction Analyzing and improving the generalization ability of deep learning models under out-of-distribution (OOD) environments has been of keen interest in the machine learning community. On various OOD test sets, accuracy prediction (AccP) \cite{deng2021investigating} investigates unsupervised risk proxies correlated with model accuracy, while domain generalization (DG) \cite{muandet2013domain,min2022domain,kim2023domain} aims to improve the average model accuracy when taking knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains. Their algorithm design and evaluation rely on datasets that have multiple OOD domains. In the community, such multi-domain datasets exist, but they have their respective limitations. For example, PACS and DomainNet \cite{peng2019momentum} commonly used in DG, contain 4 and 6 domains, respectively. While their images are from the real world, the small number of domains may limit the effectiveness and generalizability of the algorithms. In comparison, CIFAR-10-C \cite{hendrycks2019benchmarking} and ImageNet-C \cite{hendrycks2019benchmarking} have more domains, i.e., 50 and 75, respectively, but both are synthetic and have limited reflection on real-world scenarios. In the iWILDS-Cam dataset \cite{beery2021iwildscam}, there are 323 real-world domains captured by different cameras, but it was originally intended for animal counting; there are typically multiple objects in an image, and object categories in each domain are incomplete. As such, existing works \cite{miller2021measuring} usually merge the 323 domains into a few (e.g., 2) for label space completeness. 
To address the lack of appropriate multi-domain datasets, this paper introduces CIFAR-10-Warehouse, or CIFAR-10-W, a collection of 180 datasets, where each dataset has the same categories as CIFAR-10 and is viewed as a different domain. Specifically, 143 of them are real-world ones, collected by searching various image-sharing platforms with various text prompts, such as a cartoon deer. *Equal contribution. Table 1: Dataset comparison. We list key statistics of CIFAR-10-W and existing alternatives commonly used for accuracy prediction (AccP) and domain generalization (DG). CIFAR-10-W is advantageous in its larger number of real-world domains. The synthetic CIFAR-10 testbeds (e.g., CIFAR-10-C) may have infinitely many domains by varying corruption types and intensity. | Datasets | # domains | # test images | # classes | corrupted? | image size | description | |-------------------|-----------|---------------|-----------|------------|------------|-------------| | CIFAR-10.1 | 1 | 2,000 | 10 | No | 32 x 32 | AccP | | CIFAR-10.2 | 1 | 2,000 | 10 | No | 32 x 32 | AccP | | CIFAR-10-C | 50 | 500,000 | 10 | Yes | 32 x 32 | AccP | | CIFAR-10.1-C | 50 | 100,000 | 10 | Yes | 32 x 32 | AccP | | CIFAR-10.2-C | 50 | 100,000 | 10 | Yes | 32 x 32 | AccP | | CIFAR-10-C | 19 | 950,000 | 10 | Yes | 32 x 32 | AccP & DG | | CIFAR-10.1-C | 19 | 190,000 | 10 | Yes | 32 x 32 | AccP & DG | | ImageNet-C | 75 | 3,750,000 | 1000 | Yes | 224 x 224 | - | | Colored MNIST | 3 | 70,000 | 2 | No | 28 x 28 | | | Rotated MNIST | 6 | 70,000 | 10 | No | 28 x 28 | | | VLCS | 4 | 10,729 | 5 | No | 224 x 224 | | | OfficeHome | 4 | 15,588 | 65 | No | 224 x 224 | DG | | PACS | 4 | 9,991 | 7 | No | 224 x 224 | | | Terra Incognita | 4 | 24,788 | 10 | No | 224 x 224 | | | DomainNet | 6 | 586,575 | 345 | No | 224 x 224 | | | CIFAR-10-W | 180 | 608,691 | 10 | No | 224 x 224 | AutoEval & DG | or a yellow dog. The rest 37 are generated using stable diffusion (Rombach et al., 2022), using natural or unnatural prompts. CIFAR-10-W has a total of 608,691 images, and each domain typically has 300 to 8,000 images. In Table 1, we summarize CIFAR-10-W and several notable datasets that can be utilized to evaluate AccP and DG methods. It is important to highlight that datasets commonly used for AccP tasks typically consist of a single set, like CIFAR-10.1 (Recht et al., 2018) and CIFAR-10.2 (Lu et al., 2020), or multiple sets generated by corrupting one single dataset. In comparison, CIFAR-10-W has more domains with a broad distribution coverage, in which most are real-world, thus offering an ideal test bed for generalization studies. Domains in CIFAR-10-W have a broad distribution coverage of the 10 classes in the original CIFAR-10, such as colors, image styles and unnatural compositions. Moreover, most datasets in CIFAR-10-W are composed of real-world images; the rest are generated by stable diffusion. This allows us to study model generalization on a broad spectrum of distributions. Further, each domain in CIFAR-10-W covers images from all ten classes while keeping a moderate extent of the imbalance ratio of class distribution. The latter is useful when such data-centric research has not reached full maturity, and we can always create class absence from these data. We conducted benchmarking of popular accuracy prediction and domain generalization methods on CIFAR-10-W, resulting in interesting observations. 
Specifically, we found that domain generalization methods consistently improve performance over the baseline on near-OOD datasets, but their effectiveness decreases on far-OOD datasets. Furthermore, we discover obtaining accurate accuracy predictions becomes more challenging for sets that exhibit a significant domain gap with the classifier training set. Lastly, we discussed the potential benefits of CIFAR-10-W for other research fields. 2 Data Collection Diffusion model generated data. CIFAR-10-W includes 37 datasets generated by Stable-diffusion-2-1 (Rombach et al., 2022). Among them, 12 sets (CIFAR-10-W DF) are generated by using promopt ‘high quality photo of {color}{class name}’, where color is chosen from the 12 options shown in Fig. 1(A). Besides, we add ‘cartoon’ in the prompts, i.e., ‘high quality cartoon photo of {color}{class name}’, to generate another 12 sets (CIFAR-10-W DF.c). In addition, we use some special prompts in which the background, style and target objects do not naturally co-exist, to generate 13 sets (CIFAR-10-W DF:h). Details of these unnatural prompts are provided in the Appendix. Real-world searched data. CIFAR-10-W consists of 143 datasets that are collected through targeted keyword searches with specific conditions, such as color or style (cartoon). These searches were conducted across seven different search engines, including Google, Bing, Baidu, 360, Sogou, and stock photography/footage providing website, Pexels, as well as a photo/video sharing social website, Flickr. Fig. 1(A) illustrates the color options utilized for image searching. Among the search engines, Figure 1: **Colors, sources, statistics, and examples of CIFAR-10-W.** (A) Datasets of CIFAR-10-W are collected from 8 sources: 7 search engines (e.g., Google and Bing) and the diffusion model, where numbers after each source denote the number of datasets. We also depict color and style options used in prompting search and generation. (B) Distribution of the number of images for each category in datasets searched by keywords (KW), keywords plus cartoon (KWC) and diffusion under specific color conditions. (C) Sample images from different domains are shown. Baidu, 360, and Sogou offer the same set of 12 color options, which differ by one or two colors compared to those provided by Google and Bing. Flickr provides 15 color options, while Pexels offers 20 color options. Additionally, for each color in Google, Bing, Baidu, and 360, an additional search is conducted using the category name followed by the term ‘cartoon’ to retrieve a separate dataset (indicated by $\times 2$ in Fig. 1). Finally, there are 95 sets searched by keywords with color options (hereafter called CIFAR-10-W KW) and 48 sets searched by keywords with cartoon and color options (hereafter called CIFAR-10-W KWC). **Dataset statistics.** We carefully create the CIFAR-10-W dataset by manually removing noisy data from all sets, and the resulting numbers of images per dataset/category are shown in Fig. 1(B). For each set, the minimum number of images is 300, and the maximum 8,000, while most are between 1,000 and 6,000. There is also a moderate extent of class imbalance with the real-world datasets, with varying numbers of instances across different categories. In Fig. 1(C), we provide sample images from CIFAR-10-W. We can see that images of the same category but searched by different colors exhibit distinct content. 
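For illustration, the prompt construction described above can be written out as a short sketch. The CIFAR-10 class names are the standard ones, but the colour list below is only a placeholder: the actual 12 colours are those shown in Fig. 1(A) and are not enumerated in the text.

```python
# Sketch of enumerating the diffusion prompts for the DF and DF.c subsets.
CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
COLORS = ["red", "yellow", "blue", "green", "black", "white"]  # illustrative subset, not the actual 12 colours

def build_prompts(cartoon: bool = False) -> list:
    style = "cartoon photo" if cartoon else "photo"
    return [f"high quality {style} of {color} {name}" for color in COLORS for name in CLASSES]

df_prompts = build_prompts(cartoon=False)   # CIFAR-10-W DF
dfc_prompts = build_prompts(cartoon=True)   # CIFAR-10-W DF.c
```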
Moreover, images of the same category searched using one of the color options also showcase notable variations (e.g., Flic.-red). In addition, using the same search keyword and color option results in different images on different platforms (e.g., Flic.-red vs. Goog.-red). **Privacy and license.** Some privacy information may be presented in CIFAR-10-W, such as license plate on the vehicle. To address potential privacy concerns, we manually blur the human faces and license plates. CIFAR-10-W inherits the licenses from its respective sources, provided that those licenses are explicitly stated. In cases where a license is not explicitly mentioned, CIFAR-10-W is distributed under license CC BY-NC 4.0[^1], which restricts its use for non-commercial purposes. ### Task I: Model Accuracy Prediction on Unlabeled Sets #### 3.1 Benchmarking Setup **Datasets.** We use CIFAR-10 dataset to train the classifiers to be evaluated and the 180 datasets in CIFAR-10-W as test sets, where all images are resized to the size of $32 \times 32$ (experiments of large-size $224 \times 224$ images are provided in the Appendix). Meanwhile, we use all the 188 corrupted sets from CIFAR-10-C, CIFAR-10.1-C, and CIFAR-10.2 as test sets for comparison. Because the [^1]: https://creativecommons.org/licenses/by-nc-sa/4.0/ 188 sets are all corrupted versions of CIFAR-10, CIFAR-10.1, and CIFAR-10.2, we name them collectively as CIFAR-10 corruption series or CIFAR-10-Cs hereafter. **Classifiers.** We conducted evaluations on a total of 73 classifiers (among them, 30 are evaluated in the Appendix), including ResNet, VGG, DenseNet, and others. **Methods to be evaluated.** We evaluate 8 existing accuracy prediction methods on CIFAR-10-Cs and CIFAR-10-W. These methods can be broadly categorized into two groups: **prediction score-based** and **feature-based**. Prediction score-based methods use the final layer output, i.e., Softmax to connect with classification accuracy. They include prediction score (Pred. score), entropy (Hendrycks & Gimpel [2017]), average thresholded confidence with maximum confidence score function (ATC-MC) (Garg et al. [2022]), difference of confidence (DoC) (Guillory et al. [2021]), and nuclear norm (Deng et al. [2023]). Feature-based methods use the output from the penultimate layer to design a dataset representation related to model accuracy on each test set, including Fréchet distance (FD) (Deng & Zheng [2021]) and Bag-of-Prototypes (BoP) (Tu et al. [2023]). Besides, we modify the model-centric method ‘agreement on the line’ (AoL) (Baek et al. [2022]) for accuracy prediction on multiple unseen sets for a given model, which is referred to as MS-AOL. **Metrics.** The mean absolute error (MAE) is used to measure error between the predicted and ground-truth accuracy. A lower MAE indicates a more precise AccP and vice versa. Spearman’s rank correlation $\rho$ (Spearman [1961]) is used to quantify the correlation strength between different accuracy indicators and ground-truth accuracy. When using MAE as a metric, the leave-one-out evaluation strategy is implemented. This involves training a linear regressor using 179 sets and then using the trained regressor to predict the accuracy of the remaining single test set. MAE (%) of each accuracy prediction method is calculated on the 180 test sets. Please note that error bars are not relevant in the context of this task. This is attributed to the specific nature of the AccP setting, where a fixed classifier is subjected to evaluation. 
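The score-based accuracy indicators and the leave-one-out evaluation protocol described above can be sketched as follows, with a fixed classifier evaluated on many unlabeled test sets. These follow the general recipes of the cited methods but are simplified: the thresholds, the nuclear-norm normalisation, and the function names are ours and may differ from the original implementations.

```python
import numpy as np

def accuracy_proxies(probs: np.ndarray) -> dict:
    """Dataset-level accuracy indicators from an (N, C) softmax prediction matrix (sketch)."""
    max_conf = probs.max(axis=1)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return {
        "pred_score_0.9": (max_conf > 0.9).mean(),   # fraction of confident predictions (Pred. score)
        "avg_confidence": max_conf.mean(),           # used by DoC-style estimators
        "neg_entropy": -entropy.mean(),
        "nuclear_norm": np.linalg.norm(probs, ord="nuc") / np.sqrt(probs.shape[0]),  # normalisation illustrative
    }

def leave_one_out_mae(scores: np.ndarray, accs: np.ndarray) -> float:
    """Leave-one-out MAE (%) of a linear regressor mapping a proxy score to accuracy, as in Sec. 3.1.

    `scores` and `accs` hold one proxy value and one ground-truth accuracy (fraction in [0, 1]) per test set.
    """
    errs = []
    for i in range(len(scores)):
        keep = np.arange(len(scores)) != i
        slope, intercept = np.polyfit(scores[keep], accs[keep], deg=1)
        errs.append(abs(slope * scores[i] + intercept - accs[i]))
    return 100.0 * float(np.mean(errs))
```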
Consequently, the AccP methods do not have variables, culminating in a variance of zero in their results of five repeated runs. ### 3.2 Benchmarking Results and Main Observations In Table 2, we compare the performance of eight accuracy prediction methods on both existing datasets CIFAR-10-Cs and the newly collected CIFAR-10-W. The benchmarking results provide valuable insights and allow us to analyze the performance of these methods. **CIFAR-10-W offers a more challenging testbed for AccP methods compared to commonly used synthetic test sets.** As indicated in Table 2, the baseline methods typically exhibit higher MAE on CIFAR-10-W compared to commonly used synthetic datasets. Specifically, when using different methods to predict accuracy on various test sets using the ResNet44 classifier, the average MAE values for all methods are 3.62% on CIFAR-10-Cs, while being much higher on CIFAR-10-W, i.e., 6.98%, 5.26%, 9.14%, and 6.65% for test sets generated by diffusion, searched by keywords (KW), searched by keywords plus cartoon (KWC), and all 180 test sets. We observe a consistent trend when evaluating other classifiers, where AccP methods exhibit lower accuracy on the more real-world and diverse CIFAR-10-W dataset suite compared to the corrupted sets. Specifically, Fig 2(right) shows the correlation between accuracy and MS-AoL accuracy prediction error for CIFAR-10-Cs and CIFAR-10-W datasets, respectively. The broader data spread on the y-axis of CIFAR-10-W suggests it presents more challenges. All these results suggest that the complexity and diversity of CIFAR-10-W pose additional challenges for AccP methods, making them less accurate in predicting model performance on such diverse and real-world datasets compared to the corrupted sets. **Superior accuracy prediction methods are more consistently reflected on CIFAR-10-W.** When evaluating on CIFAR-10-Cs (188 sets, third column in Table 2), the best-performing methods are Pred.S (0.9), BoP, and BoP for ResNet44, RepVGG-A0, and ShuffleNetV2 classifiers, respectively. However, when testing on CIFAR-10-W (180 sets, last column), MS-AOL performs the best among the three classifiers. Meanwhile, when conducting a more comprehensive evaluation with 40 classifiers, MS-AOL remains the best on both CIFAR-10-Cs and CIFAR-10-W, which is consistent with the results of single classifiers on CIFAR-10-W. Also, the performance of other methods such as BoP and nuclear norm is also stable on CIFAR-10-W. The more dissimilar the distributions of the classifier training set and the unseen test sets are, the more challenging it becomes for good accuracy prediction results. AccP methods tend to Table 2: Evaluation of AccP methods on CIFAR-10 Cs and CIFAR-10-W. $C_{Cs}^{10}$ and $C_{W}^{10}$ denote CIFAR-10 Cs and CIFAR-10-W, resp. We use MAE (%) to indicate estimation precision. Besides individually reporting accuracy prediction results for three classifiers, we also provide average results of (40 classifiers). For each classifier or “avg multiple classifiers (40)”, the best and second best methods for each domain category are highlighted in blue and bold, respectively. | Classifier | Method | $C_{Cs}^{10}$ | $C_{W}^{10}$ - diffusion (37) | $C_{W}^{10}$ - KW (95) | $C_{W}^{10}$ - KWC (48) | |------------|--------|--------------|-----------------------------|----------------------|-----------------------| | | All | DF Eh Df c | DF | Goog. Bing. Baid. | 360 Sogo. Flic. Pexe. | | ResNet44d | Pred. s (0.7) | 3.46 | 19.61 | 6.63 | 6.40 | 7.60 | | | Pred. 
s (0.8) | 3.37 | 9.38 | 6.36 | 5.47 | 7.30 | | | Pred. s (0.9) | 3.06 | 9.07 | 4.94 | 4.64 | 4.35 | | | Entropy | 3.14 | 9.35 | 6.51 | 5.55 | 7.19 | | | ATC-MC | 3.10 | 9.06 | 5.81 | 5.01 | 6.69 | | | DoC | 3.13 | 9.34 | 6.19 | 5.43 | 7.05 | | | Nul.norm | 3.92 | 4.22 | 3.33 | 2.85 | 3.57 | | | FD | 5.76 | 6.08 | 5.98 | 14.43 | 8.75 | | | BoP+JS | 3.41 | 4.34 | 2.81 | 4.61 | 4.52 | | | MS-AoL | 3.80 | 8.35 | 11.66 | 6.80 | 8.92 | | | Avg | 3.62 | 7.88 | 6.10 | 6.88 | 6.98 | | | Pred. s (0.7) | 5.92 | 8.44 | 6.24 | 6.53 | 7.11 | | | Pred. s (0.8) | 4.81 | 7.87 | 6.58 | 6.16 | 6.89 | | | Pred. s (0.9) | 3.90 | 7.13 | 6.62 | 5.84 | 6.55 | | | Entropy | 5.81 | 7.73 | 6.45 | 6.62 | 6.98 | | | ATC-MC | 3.80 | 7.05 | 5.69 | 5.38 | 6.38 | | | DoC | 5.23 | 7.95 | 6.30 | 6.33 | 8.89 | | | Nul.norm | 4.03 | 4.37 | 3.34 | 7.66 | 5.10 | | | FD | 5.53 | 7.33 | 3.23 | 12.08 | 7.54 | | | BoP | 3.25 | 4.61 | 2.06 | 8.45 | 5.03 | | | MS-AoL | 3.86 | 8.55 | 6.72 | 2.93 | 6.14 | | | Avg | 4.61 | 7.11 | 5.43 | 6.83 | 6.47 | | | Pred. s (0.7) | 3.48 | 9.65 | 6.02 | 6.22 | 7.36 | | | Pred. s (0.8) | 3.26 | 9.13 | 5.71 | 5.89 | 6.97 | | | Pred. s (0.9) | 3.08 | 8.70 | 5.82 | 5.51 | 6.73 | | | Entropy | 3.21 | 9.40 | 5.93 | 6.19 | 7.23 | | | ATC-MC | 3.07 | 8.66 | 5.82 | 5.76 | 6.80 | | | DoC | 3.20 | 9.25 | 5.86 | 5.91 | 7.07 | | | Nul.norm | 3.58 | 5.88 | 4.28 | 8.19 | 5.32 | | | FD | 4.52 | 5.30 | 7.25 | 15.62 | 9.28 | | | BoP | 2.90 | 5.40 | 5.92 | 12.06 | 7.73 | | | MS-AoL | 3.00 | 8.98 | 10.22 | 5.19 | 8.15 | | | Avg | 3.33 | 7.83 | 6.28 | 7.65 | 7.27 | | | Pred. s (0.7) | 4.10 | 19.18 | 7.31 | 7.36 | 7.99 | | | Pred. s (0.8) | 3.83 | 9.25 | 7.23 | 6.95 | 7.85 | | | Pred. s (0.9) | 3.59 | 9.23 | 7.06 | 6.68 | 7.70 | | | Entropy | 3.97 | 9.21 | 7.17 | 7.19 | 7.89 | | | ATC-MC | 3.74 | 9.10 | 7.21 | 6.81 | 7.74 | | | DoC | 3.81 | 9.27 | 7.22 | 6.96 | 7.86 | | | Nul.norm | 3.58 | 4.32 | 3.15 | 7.60 | 5.00 | | | FD | 6.04 | 6.04 | 3.83 | 12.55 | 7.43 | | | BoP | 3.63 | 5.94 | 4.57 | 12.67 | 7.68 | | | MS-AoL | 3.32 | 8.09 | 8.35 | 4.49 | 7.17 | | | Avg | 3.96 | 7.96 | 6.31 | 7.98 | 7.43 | face more difficulties in obtaining accurate estimations on the KWC subsets compared to KW, where cartoon images in KWC are much more differently looking to the training set, CIFAR-10, of the classifier. On KWC subsets using the ResNet-44 classifier, MAE ranges between 5.37% to 11.50%, while MAE is about 2.37% - 6.30% on the KW subsets and is lowest in the CIFAR-10-Cs, indicating that the CIFAR-10-Cs test set most closely resembles CIFAR-10 and are easier for AccP. Similar observations can be obtained on other classifiers. These results are further visualized in Fig. 2, where KWC subsets exhibit larger domain gaps (i.e., higher FD) with CIFAR-10 compared to other test sets. ### 3.3 More Results and Findings Compared with CIFAR-10-Cs, prediction-score methods prediction on CIFAR-10-W generally has larger variance for different classifiers. In Fig. 3(A), we observe that variance in MAE increases significantly from ~1% to ~6%, especially for prediction-score methods. In comparison, variance for nuclear norm, FD, BoP and MS-AoL remains at a similar level. One possible explanation... Figure 2: **Correlation studies.** (A) Visualizing correlation between accuracy and prediction scores on CIFAR-10-W. We use ResNet44 classifier and Spearman’s rank correlation $\rho$. ATC-MC (left) and MA-AoL (right) are used. 
(B) Relationship between accuracy and accuracy prediction error (MAE, %) on the CIFAR-10-Cs (left) and CIFAR-10-W (right) testbeds. Both use the MS-AoL method. Figure 3: (A) Variance of MAE (%) caused by 40 classifiers. We compare the variance of different AccP methods on CIFAR-10-Cs and CIFAR-10-W. (B) Impact of the classifier training set: CIFAR-10 vs. CIFAR-10-F. (C) Impact of test category removal: the removed categories, deleted one or two at a time, are listed at the bottom. A positive change in MAE indicates worse performance and vice versa. is that the averaging of simple scores for individual samples introduces noise when dealing with challenging test sets. In contrast, the nuclear norm considers the entire prediction matrix, making it more resilient against noise. This finding emphasizes the significance of enhancing the robustness of AccP methods for different classifiers, particularly on challenging test sets such as CIFAR-10-W. **Impact of classifier training set.** In Fig. [3](B), when tested on CIFAR-10-Cs, using CIFAR-10 as training set gives very small MAE, suggesting that CIFAR-10 is too similar to CIFAR-10-Cs. On the other hand, when tested on CIFAR-10-W, the CIFAR-10-F (Sun et al., 2021) training set gives a lower error than CIFAR-10. This is probably because CIFAR-10-F is collected from Flickr, making it relatively more similar to CIFAR-10-W, but the overall error is much higher than testing on CIFAR-10-Cs. Interestingly, the trend of nuclear norm, FD and BoP is somehow very different from other methods. Understanding this phenomenon requires future endeavors. In all, this experiment suggests if the classifier training set is different from the test sets, AccP performance would deteriorate. **Impact of missing test classes on accuracy prediction.** In Fig. [3](C), we remove one or two classes at a time from CIFAR-10-W and report MAE changes in accuracy prediction methods. We observe mixed results. If we remove confusing classes such as *deer* and *horse*, prediction-score methods have better performance. When removing single classes such as *cat*, *dog*, *frog* and *ship*, these prediction-score methods remain stable. In comparison, nuclear norm is sensitive to class removal, because the latter causes spurious responses to the prediction matrix. Apparently, the robustness of AccP methods against missing classes needs further study. **Impact of test set size.** In Fig. [4](A), when we decrease the number of test images in CIFAR-10-W and CIFAR-10-Cs, the performance of accuracy prediction drops consistently for all methods. This Figure 4: (A) Test set size on AccP methods and (B) Average and standard deviation of MAE values for each model of the 40 classifiers across 13 AccP methods. In (A), the test size is gradually reduced to 100 instances from the full dataset and the performance of methods is shown on both CIFAR-10-Cs and CIFAR-10-W. In (B), the easiest and hardest models to evaluate are indicated by green and red points, respectively. The ResNet44 classifier trained on CIFAR-10 is used. is consistent with previous findings (Deng & Zheng, 2021). Besides, we observe the magnitude of performance drop is more significant on CIFAR-10-W, again demonstrating its challenging nature. **Evaluation difficulty of different classifiers.** In Fig. 4(B), 40 classifiers are evaluated using 13 methods, and the average MAE values are displayed for each model. 
It can be observed that certain classifiers are more challenging to evaluate (such as PNASNetA42 and RrexNeXt29) compared to others. Additionally, the average MAE for the 40 classifiers is higher on CIFAR-10-W (2% to 12%) than on CIFAR-10-Cs (2% to 6%), which aligns with our previous observations from Table 2. ## 4 Task II: Domain Generalization ### 4.1 Benchmarking Setup **Datasets.** We use two settings: single-source DG and multi-source DG. For multi-source DG, we collected four datasets in addition to CIFAR-10-W. Two are searched from the Yandex search engine, using keywords (KW) and keywords plus cartoon (KWC), respectively. The rest two are generated using the diffusion model with the same prompts as the image search. For single-source DG, we use one of the four sets as the training set (diffusion model generated set is used in the paper). Results of the other three sets can be found in the Appendix. We train models using different DG methods on 1-4 sets/domains, respectively. A quarter of each source set is allocated for validation during training. **Methods to be evaluated.** We conduct evaluations on both single-source DG and multi-source DG using different methods: Empirical Risk Minimization (ERM) (Gulrajani & Lopez-Paz, 2020), Style-Agnostic Networks (SagNet) (Nam et al., 2021), Self-supervised Contrastive Regularization (SelfReg) (Kim et al., 2021), Spectral Decoupling (SD) (Pezeshki et al., 2021), Fishr (Rame et al., 2022), Empirical Quantile Risk Minimization (EQRM) (Eastwood et al., 2022), Relative Chi-Square (RCS) (Chen et al., 2023), CORrelation ALignment (CORAL) (Sun & Saenko, 2016), Group-DRO (Sagawa et al., 2019), VREx (Krueger et al., 2021) and VNE (Kim et al., 2023). All these methods use the same model ResNet18 (He et al., 2016), to ensure fair comparisons and consistency. **Evaluation Metrics.** We employ classifiers trained on the source to make predictions on CIFAR-10-W. We report the average top-1 classification accuracy (%) on 180 test sets for each DG method. ### 4.2 Benchmarking Results and Discussions CIFAR-10-W offers a more comprehensive DG evaluation environment by providing testing domains with a wide variety of domain discrepancies. In Table 3 and Fig. 5, it is apparent that classification accuracy on the target domain spans a wide range, from approximately 40% to 99%, where the majority of test set accuracy is between 60% and 90%. Generally, cartoon datasets tend to be more challenging, while images generated by the diffusion model are comparatively easier to recognize. This observation can be attributed to the fact that cartoons can significantly differ from... Table 3: Benchmarking different domain generalization methods on CIFAR-10-W. We report Top-1 classification accuracy (%). The mean and standard deviation computed over three runs are reported in the last column. All other notations are the same as in Table 2. 
| Setup | Method | $C_{10}^{\omega}$ - diffusion (37) | $C_{10}^{\omega}$ - KW (95) | $C_{10}^{\omega}$ - KWC (48) | $C_{10}^{\omega}$ | |-------|--------|-----------------------------------|--------------------------|---------------------------|----------------| | | DFb | DFc | DF | Goo Bin Bai 360 Sog Fli Prex | Goo Bin Bai 360 All | | Single (1) DG | ERM | 79.79 77.00 | 95.39 | 77.89 85.47 69.66 72.14 82.78 86.37 90.85 | 48.72 48.01 39.78 41.39 72.27 ± 2.88 | | | SelfReg | 79.71 77.01 | 93.11 | 76.67 68.00 72.51 81.79 85.64 90.57 48.48 46.37 38.57 40.47 71.36 ± 3.20 | | | SD | 79.75 78.34 | 94.54 | 77.96 84.91 69.21 72.76 82.65 86.16 90.99 | 49.21 48.28 39.79 41.85 72.35 ± 3.69 | | | RSC | 80.94 77.49 | 93.50 | 78.33 84.73 69.78 74.43 83.86 87.53 91.56 | 49.21 47.35 40.17 41.71 72.70 ± 4.28 | | Multi-(2)-Source DG | ERM | 85.24 89.92 | 93.71 | 76.82 84.07 68.47 71.33 80.75 85.08 89.29 | 63.61 58.35 50.64 54.39 75.97 ± 1.56 | | | SagNet | 86.30 91.35 | 95.05 | 79.90 86.31 71.78 74.60 83.84 86.94 91.23 | 66.69 60.85 52.33 57.71 78.37 ± 1.49 | | | SelfReg | 86.42 89.64 | 93.45 | 78.20 84.84 70.20 73.48 82.59 87.08 90.91 | 65.69 60.53 52.28 57.56 75.50 ± 0.48 | | | SD | 86.55 90.27 | 94.18 | 80.59 85.73 71.82 75.54 83.99 88.12 91.59 | 66.91 61.48 52.48 57.24 78.57 ± 0.97 | | | Fishr | 85.79 90.06 | 93.45 | 79.07 86.58 69.67 72.71 81.86 86.66 90.28 | 63.92 59.61 50.67 54.27 76.98 ± 1.09 | | | EQRM | 86.07 91.04 | 94.68 | 78.98 86.56 71.22 74.04 83.24 86.59 90.63 | 67.07 61.32 53.16 57.99 78.12 ± 1.24 | | | CORAL | 82.24 86.15 | 90.94 | 71.13 78.33 64.24 66.81 76.46 80.89 85.76 | 57.32 52.79 47.03 50.27 71.68 ± 1.76 | | | VREx | 85.21 90.89 | 94.08 | 78.76 86.07 70.15 73.09 82.81 86.30 90.32 | 65.80 60.23 51.10 55.90 77.26 ± 0.54 | | | GroupDRO| 85.11 89.49 | 93.93 | 79.41 86.11 71.02 75.27 83.21 87.20 90.63 | 64.65 59.42 51.17 56.02 77.46 ± 1.71 | | Multi-(3)-Source DG | ERM | 86.69 93.90 | 98.71 | 88.18 94.51 79.78 85.10 91.73 93.08 99.59 | 70.60 64.74 55.87 58.49 83.46 ± 0.84 | | | SagNet | 86.78 94.74 | 98.86 | 88.57 93.86 79.91 84.19 90.88 92.15 95.41 | 71.15 65.16 56.14 59.51 83.41 ± 0.82 | | | SelfReg | 87.58 95.15 | 98.87 | 89.19 94.75 80.86 86.14 92.42 93.26 96.06 | 72.88 66.60 57.53 61.26 84.48 ± 0.48 | | | SD | 88.08 95.07 | 98.96 | 90.43 95.58 81.70 86.69 92.75 93.96 96.70 | 73.56 66.87 58.41 61.76 85.05 ± 0.51 | | | Fishr | 87.38 94.95 | 98.99 | 89.28 94.36 80.12 84.28 93.19 96.12 71.84 66.09 56.76 60.85 84.00 ± 0.73 | | | EQRM | 86.64 94.31 | 98.87 | 89.38 94.53 80.21 83.81 89.79 93.55 94.49 | 71.98 59.47 54.27 59.27 82.87 ± 0.97 | | | CORAL | 83.98 93.76 | 98.84 | 86.07 92.87 77.46 76.05 88.86 90.23 94.11 | 67.62 62.34 54.03 57.15 79.60 ± 0.60 | | | VREx | 86.92 95.12 | 98.83 | 95.41 98.40 81.00 86.34 92.63 93.51 96.58 | 72.78 66.19 57.71 61.27 84.55 ± 0.92 | | | GroupDRO| 87.73 94.75 | 98.83 | 88.74 94.34 81.06 85.59 92.08 93.10 96.06 | 72.26 66.65 57.74 61.23 84.32 ± 0.68 | | Multi-(4)-Source DG | ERM | 87.49 95.89 | 98.77 | 89.34 94.49 83.23 88.04 92.96 93.45 96.32 | 83.88 76.70 67.53 74.78 87.85 ± 0.52 | | | SagNet | 87.47 96.50 | 98.89 | 89.88 95.08 83.38 88.35 93.46 93.60 96.56 | 84.98 76.87 68.25 75.37 88.30 ± 0.39 | | | SelfReg | 88.14 96.83 | 98.89 | 90.46 95.21 83.99 88.38 92.95 93.57 96.54 | 85.04 77.48 69.39 76.18 88.48 ± 0.76 | | | SD | 88.27 96.54 | 98.81 | 95.59 94.19 89.99 94.34 96.56 95.80 78.13 | 69.93 76.47 69.40 ± 0.49 | | | Fishr | 87.75 96.20 | 98.94 | 89.46 94.88 84.53 87.97 93.09 93.57 96.01 | 83.01 72.27 67.97 78.91 ± 0.57 | | | EQRM | 87.56 96.02 | 
98.90 | 89.47 94.54 82.97 87.51 92.29 93.13 96.26 | 84.10 76.52 67.27 74.22 79.70 ± 0.76 | | | CORAL | 86.31 95.04 | 98.48 | 87.04 93.17 80.73 84.37 90.34 91.43 94.86 | 80.84 74.20 65.10 71.32 85.77 ± 1.29 | | | VREx | 87.98 96.32 | 98.81 | 90.49 95.36 84.28 89.06 93.78 94.17 96.95 | 85.14 77.14 69.20 75.42 88.64 ± 0.54 | | | GroupDRO| 87.60 95.97 | 98.76 | 89.15 94.42 82.67 87.47 92.65 93.20 96.58 | 84.06 76.39 67.79 74.62 87.75 ± 0.41 | | | VNE | 87.06 95.79 | 98.64 | 88.43 93.88 82.01 86.89 91.91 92.48 95.61 | 83.87 75.88 67.24 73.75 87.17 ± 0.61 | the source domain, making it more difficult for models to generalize. On the other hand, generated images often exhibit high qualities, featuring large and distinct objects. DG improvement on different domains. In Fig. [5](A), we observe mixed results regarding the improvement brought by the SD method compared to the baseline accuracy on the 180 target sets. When using four sources, most domains show improvement. However, when using only one source, improvement is primarily seen in real domains (KW), while diffusion-generated and cartoon domains do not exhibit significant improvement. Previous works have indicated limited improvement of DG methods on datasets like PACS and DomainNet, potentially due to the small number of test domains. While CIFAR-10-W may not cover all possible target domains, including more diverse domains in the evaluation can provide valuable insights into the performance and limitations of DG methods. Predicting accuracy of models after domain generalization. Considering DG techniques only update the classification model based on the source domain(s), it is possible to predict the DG model accuracy on unseen target sets using methods described in Section 3. In Fig. [5](B), we use the FD and nuclear norm methods to predict the accuracy of ResNet18 models trained or domain generalized under the single-source DG setup. Interestingly, we observe that both accuracy prediction methods exhibit similar performance, regardless of whether DG is applied. This finding suggests that AccP techniques may be applicable and effective even for domain generalized models. 5 OTHER FIELDS THAT POTENTIALLY BENEFIT FROM CIFAR-10-W Learning from noisy data. Most existing datasets (Wei et al., 2021; Song et al., 2019) in this area are manually created e.g., labels are flipped between classes (Ghosh & Lan, 2021). There exist a few Figure 5: (A) Impact of increasing the number of source domains on domain generalization. The ResNet-18 classifier is trained using the domain generalization technique SD with our searched training sets as the source domains. The density plot on the y-axis illustrates the density of the test set at various levels of improvement. On the x-axis, the density plot shows the distribution of accuracies achieved by the baseline method ERM on CIFAR-10-W datasets. (B) Effectiveness of accuracy prediction methods (nuclear norm and FD) on CIFAR-10-W. We evaluate the performance using the ResNet-18 model trained with two different approaches: the normally trained model (top) and the model trained with the domain generalization technique SD (bottom). real-world noisy datasets (Xu et al., 2018; Lee et al., 2018; Li et al., 2017b), but they are not cleaned, meaning that there is no ground truth to evaluate noise label spotting algorithms. In this regard, CIFAR-10-W offers a valuable real-world noisy dataset, because we have recorded the incorrectly labeled images during the cleaning and annotation process. 
Test time and unsupervised domain adaptation (DA). Datasets in CIFAR-10-W generally have a few thousand images each, which might not be sufficient for full training. Nevertheless, it is possible to use them for unsupervised DA, where target domains with a few hundred unlabeled images target domain are often used (Csurka, 2017; Luo et al., 2023; Long et al., 2018; Wang et al., 2020). Moreover, it would be even easier to use CIFAR-10-W for test-time DA, because the latter usually assumes the use of a batch of images for online training. The broad coverage of distributions makes CIFAR-10-W an ideal testbed for evaluating and benchmarking DA algorithms. Out-of-distribution (OOD) detection. In OOD detection, the in-distribution (ID) test data typically have the same distribution as the ID training data. In (Hendrycks & Dietterich, 2019; Mintun et al., 2021; Lu et al., 2020), in-distribution test data contain a few domains, which require the OOD detection algorithm to be generalizable to various ID distributions. In this regard, CIFAR-10-W will significantly expand the boundary of the ID domain. 6 CONCLUSION This paper introduces CIFAR-10-Warehouse, a collection of 180 datasets with broad distribution coverage of the 10 categories in original CIFAR-10. Most of these datasets are real-world domains searched from various image search engines, and the rest are generated by stable diffusion. Diversity of the domains is reflected in rich color spectrum, styles, (un)naturalness and class imbalance. On CIFAR-10-W, we benchmark popular methods in accuracy prediction and domain generalization. We confirm that CIFAR-10-W creates challenging setups for the two tasks where interesting insights are observed. We also discuss other fields where this dataset can be used and believe it will contribute to model generalization analysis under more real-world and a large number of test domains. ACKNOWLEDGEMENT We thank Aditi Raghunathan for valuable discussions on early CIFAR-10-W prototypes, and thank the anonymous reviewers and AC for their insightful comments and suggestions that improved this paper. This research was funded in part by the ARC Discovery Project (DP210102801) to Liang Zheng, and ARC DP230101196, DP240101814, CE200100025 to Zi Huang. REFERENCES Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019. Christina Baek, Yiding Jiang, Aditi Raghunathan, and J Zico Kolter. Agreement-on-the-line: Predicting the performance of neural networks under distribution shift. *Advances in Neural Information Processing Systems*, 35:19274–19289, 2022. Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 456–473, 2018. Sara Beery, Arushi Agarwal, Elijah Cole, and Vighnesh Birodkar. The iwildcam 2021 competition dataset. *arXiv preprint arXiv:2105.03494*, 2021. Sentao Chen, Lei Wang, Zijie Hong, and Xiaowei Yang. Domain generalization by joint-product distribution alignment. *Pattern Recognition*, 134:109086, 2023. Gabriela Csurka. A comprehensive survey on domain adaptation for visual applications. *Domain adaptation in computer vision applications*, pp. 1–35, 2017. Weijian Deng and Liang Zheng. Are labels always necessary for classifier accuracy evaluation? In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15069–15078, 2021. Weijian Deng, Yumin Suh, Stephen Gould, and Liang Zheng. 
Confidence and dispersity speak: Characterising prediction matrix for unsupervised accuracy estimation. In *ICML*, 2023. Cian Eastwood, Alexander Robey, Shashank Singh, Julius Von Kügelgen, Hamed Hassani, George J Pappas, and Bernhard Schölkopf. Probable domain generalization via quantile risk minimization. *arXiv preprint arXiv:2207.09944*, 2022. Chen Fang, Ye Xu, and Daniel N Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1657–1664, 2013. Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, and Hanie Sedghi. Leveraging unlabeled data to predict out-of-distribution performance. In *International Conference on Learning Representations*, 2022. Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain generalization for object recognition with multi-task autoencoders. In *Proceedings of the IEEE international conference on computer vision*, pp. 2551–2559, 2015. Aritra Ghosh and Andrew Lan. Do we really need gold samples for sample weighting under label noise? In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 3922–3931, 2021. Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1134–1144, 2021. Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. *arXiv preprint arXiv:2007.01434*, 2020.
U9NHClvopO
I am not convinced by the result that SuperPos PT without dropout outperforms full fine-tuning by such a large margin (16.4% on the CB task with T5-Small); the same goes for T5-Base on the COPA task. It is not convincing that a PEFT method can beat full fine-tuning by such a large margin: as the authors themselves note, even LoRA and adapters, which perform better than soft-prompt-based methods, only occasionally beat full fine-tuning, and then by a very small margin. Could you please shed some light on this? Also, are the hyperparameters used to tune the full model parameters consistent with the original paper?
SUPERPOS-PROMPT: ENHANCING SOFT PROMPT TUNING OF LANGUAGE MODELS WITH SUPERPOSITION OF MULTI TOKEN EMBEDDINGS

Anonymous authors
Paper under double-blind review

ABSTRACT

Soft prompt tuning techniques have recently gained traction as an effective strategy for the parameter-efficient tuning of pretrained language models, particularly minimizing the required adjustment of model parameters. Despite their growing use, achieving optimal tuning with soft prompts, especially with smaller datasets, remains a substantial challenge. This study makes two contributions in this domain: (i) we introduce SUPERPOS-PROMPT, a new reparameterization technique employing the superposition of multiple pretrained vocabulary embeddings to improve the learning of soft prompts. Our experiments across several GLUE and SuperGLUE benchmarks consistently highlight SUPERPOS-PROMPT’s superiority over Residual Prompt tuning, exhibiting an average score increase of +6.4 in T5-Small and +5.0 in T5-Base along with a faster convergence. Remarkably, SUPERPOS-PROMPT occasionally outperforms even full fine-tuning methods. (ii) Additionally, we demonstrate enhanced performance and rapid convergence by omitting dropout from the frozen network, yielding consistent improvements across various scenarios and tuning methods. Unlike many existing strategies, our approach does not rely on the availability of a proficient pretrained source prompt for initialization, thereby ensuring notable flexibility and more effective combination of related prompt candidates.

1 INTRODUCTION

Optimizing deep neural network models generally requires substantial data to achieve optimal performance. This prerequisite has underscored the importance of transfer learning in various domains of deep learning, including natural language processing (NLP) (Ruder et al., 2019), computer vision (Gopalakrishnan et al., 2017), and reinforcement learning (Zhu et al., 2023). Transfer learning is an approach in which a pre-trained model is adapted and fine-tuned for new tasks, particularly when labeled data is limited. Foundation models, denoted as Large Language Models (LLMs) in NLP, are large models trained on vast datasets utilizing self-supervised methodologies (Pfeiffer et al., 2023), acting as a base for further fine-tuning on new tasks. Over time, the scale of publicly available LLMs has remarkably grown, from BERT’s 340 million parameters (Devlin et al., 2019) to contemporary models housing up to 100 billion parameters (Almazrouei et al., 2023). Full fine-tuning of models is one approach to overcoming the challenges posed by limited data, at the cost of extensive memory. Parameter-Efficient Transfer Learning (Guo et al., 2021), also known as Parameter-Efficient Fine-tuning (PEFT) (Chen et al., 2023) or Delta-Tuning (Ding et al., 2023), offers a solution to this problem. PEFT involves training a minimal subset of parameters, either selected from existing ones or newly added (Lialin et al., 2023). This technique notably reduces memory and storage needs, as only the modified parameters need to be tuned during training and stored post-training. Various mechanisms are employed in PEFT: (i) Adapter: One prominent PEFT technique is ‘Adapter’ training (Houlsby et al., 2019), involving the integration of a bottleneck feed-forward network at each transformer block. (ii) LoRA: Another PEFT method, LoRA (Hu et al., 2022), is developed to identify a low-rank delta within specific parameter matrices.
(iii) Soft Prompt Tuning (Lester et al., 2021) is a further PEFT technique that concatenates a trainable matrix to the input embeddings. The columns of this trainable matrix are referred to as soft prompts. Although not the leading technique in terms of performance among other PEFT techniques, soft prompt tuning... is renowned for its exceptional parameter efficiency. **Soft Prompt Tuning** is also the central focus of this paper. Different strategies are proposed for an efficient soft prompt tuning: (i) **Prompt layers reparameterization:** *Residual Prompt Tuning* (Razdaibiedina et al., 2023) is an example of reparameterization of prompt layers employing residual reparameterization to stabilize the prompt tuning process. It uses a randomly initialized autoencoder connected with a residual link. (ii) **Pre-trained prompts as initial states:** another strategy involves using pre-trained prompts as initial states for new prompts. An example is Soft Prompt Transfer (SPoT) (Vu et al., 2022), which trains a prompt on one or more source tasks and then utilizes it to initialize the prompt for a target task. The selection of appropriate source tasks is crucial in this approach, and a retrieval algorithm is employed to identify similar tasks in a semantic task space. (iii) **Combined approach:** approaches like Intrinsic Prompt Tuning (IPT) (Qin et al., 2021), ATTEMPT (Asai et al., 2022), PANDA (Zhong et al., 2022), or MPT (Wang et al., 2023) combine usage of both reparameterization and pre-trained soft prompts. IPT decomposes the pre-trained soft prompts of diverse NLP tasks into a shared low-dimensional subspace by training an autoencoder. Subsequently, the decoder part of the autoencoder is utilized to facilitate learning new prompts in reduced dimensions. ATTEMPT trains an attention layer to combine the right pre-trained prompts using softmax. PANDA uses a knowledge distillation technique to transfer the “knowledge” from the source prompt to the target prompt. MPT trains a single transferable prompt by distilling knowledge from multiple task-specific source prompts. The training of soft prompts presents notable challenges as highlighted in several studies (Qin et al., 2021; Li & Liang, 2021); particularly, (i) fine-tuning soft prompts is optimization-intensive, particularly with limited data and smaller model sizes in T5 family between 50 to 300 million parameters (Lester et al., 2021); (ii) although typically trainable, soft prompts converge considerably slower compared to full fine-tuning and other delta-tuning methods (Ding et al., 2022). These issues constitute the primary focus of our work. The contributions of our work can be summarized in two folds: (i) we propose **SUPERPOS-PROMPT**, an innovative reparameterization technique that formulates prompts as superpositions on multiple token embeddings. These token embeddings are sampled vectors from the embedding layer of the language model. This approach enables enhanced stability in prompt tuning using diverse information emanating from multiple token embeddings. This strategy facilitates the learning of a new task representation utilizing a combination of multiple task embeddings. We show that **SUPERPOS-PROMPT** approach almost consistently outperforms existing relevant soft prompt tuning approaches in 13 Glue and SuperGlue benchmarking tasks. (ii) Our research indicates that omitting dropout (Srivastava et al., 2014) from the original network can yield more efficient and expedited convergence in prompt tuning. 
To the best of our knowledge, this observation has not been addressed in prior studies. ## 2 BACKGROUND **Full Fine-tuning** involves starting with pre-trained weights and then adjusting all of these weights based on the training data of the new tasks. For example, if we have a new classification dataset $T$ and the weights of our model, written as $\theta$, we aim to maximize the log likelihood using pre-trained weights as our starting point. $$\max_{\theta} \sum_{x,y \in T} \log P_\theta(y | X)$$ **Parameter-Efficient Fine-tuning** involves adding new weights or tune only subset of original weights without changing the other parameters $\theta$. If we denote $\theta'$ as our new parameters it means: $$\max_{\theta'} \sum_{x,y \in T} \log P_{\theta}(y | X; \theta')$$ **Prompt tuning** is a type of Parameter-Efficient Fine-tuning (PEFT) method where new weights are added only to the model’s input by concatenation, without altering $\theta$. In simpler terms, it implies that we search only in the parameter space $P$ to optimize our model: $$\max_P \sum_{x,y \in T} \log P_\theta(y \mid [P|X])$$ To explain further, if we have a sequence of $l$ tokens, like $\{x_1, x_2, ..., x_l\}$, the model first turns the tokens into a matrix $X \in \mathbb{R}^{e \times l}$, where $l$ is the number of input tokens and $e$ is the dimension of the embedding space. The goal is to find the best soft prompts for our task. These soft prompts are written as $P \in \mathbb{R}^{e \times n}$, where $n$ is the number of the soft prompts. The model then takes the joined matrix $[P|X] \in \mathbb{R}^{e \times (n+l)}$ as input (Lester et al., 2021). This is illustrated in Figure 1.(a). 3 APPROACH Our objective is to enhance the model’s ability to learn and refine soft prompts effectively utilizing multiple token embeddings. This technique is grounded in the observation that initiating the prompt with token representations is generally more beneficial compared to beginning with random vectors (Lester et al., 2021). However, a question arises: how can we employ more than one token embedding for each prompt embedding? We address this issue by adopting a superposition—a weighted sum of several chosen tokens for each prompt embedding, as illustrated in Figure 1.(b). **SuperPos-Prompt:** We start by randomly selecting $m$ unique token embeddings from the token embedding layer, denoted as $e_1, e_2, ..., e_m$. These are organized as columns of the matrix $E \in \mathbb{R}^{e \times m}$. To compute each prompt token $p_i$, this matrix is multiplied by a vector $p'_i \in \mathbb{R}^m$. During our tuning process, both the matrix $E$ and each $p'_i$ are jointly optimized. $$\forall i \in \{1, 2, \ldots, n\} \quad p_i = Ep'_i = \begin{bmatrix} e_1 & e_2 & \cdots & e_m \end{bmatrix} \begin{bmatrix} p'_i \end{bmatrix} = \sum_{j=1}^{m} p'_{ij} e_j$$ During our experiments, we noticed a problem where the inclusion of weight decay in the optimizer led to a reduction in the norm of $E$, resulting in significant information loss in this layer. To combat this, we reparameterize the matrix $E$ as the sum of two matrices: $E_{freeze}$ and $\Delta E$. In this arrangement, only $\Delta E$ is adjusted while $E_{freeze}$ remains constant. This strategy effectively counters the negative impact of weight decay on the original embeddings, allowing the model to learn a $\Delta E$ with a lower norm and thus minimally altering the embeddings. For initialization, the matrix $\Delta E$ is set as a zero matrix. 
$$E = E_{freeze} + \Delta E \quad \Delta E_{init} = 0_{e \times m}$$

In our experiments, we employed identical initial token embeddings for each prompt while permitting each to adapt uniquely, yielding an independent $\Delta E_i$ for every prompt. The final formula to compute each prompt $p_i$ is delineated below, and the illustration is provided in Figure 1.(f):

$$p_i = (E_{freeze} + \Delta E_i)p'_i$$

COMPARISON TO SIMILAR PROMPT TUNING APPROACHES

**Intrinsic Prompt Tuning (IPT)** (Qin et al., 2021) involves training an autoencoder during the Multi-task Subspace Finding phase. Post this phase, the decoder part of the autoencoder is employed in the training of new prompts, a stage referred to as Intrinsic Subspace Tuning (Figure 1.(d)). In contrast, our approach, SUPERPOS-PROMPT, sidesteps this complexity. We construct the decoder layer by utilizing token embeddings selected directly from the embedding layer. This step negates the need for pre-trained soft prompts and the associated training of an autoencoder, as illustrated in Figure 1.(e).

**ATTEMPT** (Asai et al., 2022) also has similarities with our method, but it relies on pretrained source prompts instead of token embeddings, and employs softmax weighting instead of superposition. Through our experiments, we noticed that utilizing superposition is more efficient than softmax weighting, as we show in §A.2.

**Residual Prompt Tuning:** Our approach shares similarities with Residual Prompt Tuning (Razdaibiedina et al., 2023), as both employ reparameterization to achieve improved and more rapid convergence, avoiding the use of pretrained soft prompts. However, Residual Prompt Tuning utilizes an encoder-decoder model with a residual connection and is tuned end-to-end, as shown in Figure 1.(c). In contrast, our model is simpler, having only half the components to tune. It consists only of an up-projection layer, and by using pretrained token embeddings to initialize the decoder’s weights, it offers a more advantageous starting point.

Figure 1: Overview of different prompt tuning methods: (a.) Simple Prompt Tuning: This method adjusts the prompt embeddings, $P$, which are then concatenated with the input embeddings. (b.) SuperPos-Prompt Tuning: Employs a mixture of embeddings as a weighted sum of $e_j; 1 \leq j \leq m$, based on their weight in $p_i'$. All $e_j$s and the vector $p_i'$ are co-tuned. (c.) Residual Prompt Tuning: Utilizes an autoencoder with residual connection reparametrization. (d.) Intrinsic Subspace Tuning: Employs a pre-trained decoder to map lower-dimension prompts to the model’s dimension. (e.) SuperPos-Prompt can also be interpreted as a linear up-projection initialized with sampled embeddings. (f.) The SuperPos-Prompt full calculation consists of an addition to prevent the negative effects of weight decay and a matrix multiplication to calculate the superposition of embeddings.

We evaluate our method against vanilla prompt tuning (Lester et al., 2021), residual prompt tuning (Razdaibiedina et al., 2023), and ATTEMPT (Asai et al., 2022). We intentionally excluded IPT (Qin et al., 2021) from our comparison. The exclusion is due to IPT’s requirement for 100 pre-trained source prompts to train an auto-encoder. Since they utilize BART (Lewis et al., 2020) as their backbone model, their autoencoder was incompatible with our framework. Training a new auto-encoder was not feasible as we lacked access to the necessary 100 pre-trained source prompts.
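The computation above can be summarized in a short sketch. This is not the authors' implementation: the module and parameter names are ours, and the initialization of the superposition weights $p'_i$ is an assumption; as described in the text, the sampled embeddings $E_{freeze}$ are shared across prompt tokens while each token gets its own trainable $\Delta E_i$ and $p'_i$.

```python
import torch
import torch.nn as nn

class SuperPosPrompt(nn.Module):
    """Sketch of the SuperPos-Prompt parameterization:
    p_i = (E_freeze + Delta_E_i) p'_i for each of the n prompt tokens."""

    def __init__(self, token_embeddings: torch.Tensor, n_prompts: int = 10, m_tokens: int = 128):
        super().__init__()
        vocab_size, embed_dim = token_embeddings.shape
        idx = torch.randperm(vocab_size)[:m_tokens]
        # Frozen copy of the m sampled vocabulary embeddings, shape (e, m).
        self.register_buffer("E_freeze", token_embeddings[idx].T.clone())
        # Zero-initialized, trainable Delta_E_i for each prompt token, shape (n, e, m).
        self.delta_E = nn.Parameter(torch.zeros(n_prompts, embed_dim, m_tokens))
        # Superposition weights p'_i, shape (n, m); uniform init is our assumption.
        self.weights = nn.Parameter(torch.full((n_prompts, m_tokens), 1.0 / m_tokens))

    def forward(self) -> torch.Tensor:
        E = self.E_freeze.unsqueeze(0) + self.delta_E        # (n, e, m)
        return torch.einsum("nem,nm->ne", E, self.weights)   # prompt matrix P, shape (n, e)
```

The resulting $(n, e)$ prompt matrix would then be prepended to the input embeddings, giving the $[P|X]$ input described in Section 2.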
4 EXPERIMENTS 4.1 DATASET In previous studies, smaller datasets have presented substantial challenges for prompt tuning techniques (Ding et al., 2022). To effectively contrast various methods, we have selected several tasks/datasets from both GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a), comprising both small and large datasets. The datasets employed in our study are the Quora Question Pairs (QQP) (DataCanary et al., 2017), Question NLI (QNLI), MultiNLI (MNLI) (Williams et al., 2018), The Stanford Sentiment Treebank (SST-2) (Socher et al., 2013), Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), Microsoft Research Paraphrase Corpus (MRPC) (Dolan & Brockett, 2005), The Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), Multi-Sentence Reading Comprehension (MultiRC) (Khashabi et al., 2018), Recognizing Textual Entailment (RTE), CommitmentBank (CB), Choice Of Plausible Alternatives (COPA) (Gordon et al., 2012), Words in Context (WiC) (Pilehvar & Camacho-Collados, 2019), and BoolQ (Clark et al., 2019). 4.2 BASE LANGUAGE MODEL In this study, we employ the T5 model family for conducting experiments (Raffel et al., 2020). Our approach to the classification task involves conditional generation, wherein the output comprises a string of tokens, each symbolizing a class label. This study exclusively modifies the encoder segment of the T5 model by integrating soft prompts. Given the constraints of computational resources, our analysis is confined to the small and base model sizes. Specifically, we deploy two LM-adapted versions of T5v1.1, namely t5-small-lm-adapt and t5-base-lm-adapt (Lester et al., 2021). Previous research, including studies such as the Residual Prompt and ATTEMPT, have highlighted concerns regarding the stability and tuning difficulties of T5v1.1-LM adapt when used as a backbone for prompt tuning tasks (Razdabiedina et al., 2023; Asai et al., 2022). These studies eventually switched to the original T5 checkpoint. However, utilizing the pretrained T5 original checkpoint raises concerns. Since this checkpoint is already trained on the GLUE and SuperGLUE datasets, the model does not need to learn a new task, only requiring the appropriate prompt to utilize previously acquired knowledge (Raffel et al., 2020). This situation may produce misleading results, obscuring the true performance and meaningfulness of the ultimate comparison. Therefore we implemented and tested their methods using the provided hyperparameters on T5v1.1-LM adapt. 4.3 ABLATION STUDY In SuperPos prompt tuning, a key hyperparameter is the number of tokens sampled for superposition, denoted as $m$. Figure 2.(C) shows the impact of different $m$ values on the performance of SUPERPOS-PROMPT across various tasks. On the x-axis, we display the number of tokens ($m$), and the y-axis shows the highest performance score achieved. We observe that an increase in the number of sampled tokens generally leads to better results, but improvements tend to level off after reaching 128 tokens. Based on this finding, we set the number of sampled tokens in our method to 128. 4.4 EXPERIMENT SETUP For our experiments, the following configurations were employed: **All of Prompt Tuning Methods:** We appended 10 prompt tokens to the input. Each method was tested under two conditions: with and without dropout, running for a total of 80 epochs. No learning rate scheduler was used, and the AdamW optimizer (Loshchilov & Hutter, 2019) was employed. 
**Simple Prompt Tuning:** Prompts were initialized by sampling 10 unique token embeddings from the embedding layer, using a learning rate of 0.01 and a weight decay of 0.01.

**Residual Prompt Tuning:** Prompts were initialized by sampling 10 unique token embeddings from the embedding layer, with a learning rate of 0.3 and a weight decay of 0.01, as specified in the original paper (Razdaibiedina et al., 2023); we set the bottleneck size to 128 to be comparable to our method.

Table 1: Results on some tasks from the GLUE and SuperGLUE benchmarks with 10-token prompts and training for 80 epochs. For tasks with two metrics, the average score is reported. Numbers marked with † mean that the T5 model does not converge to always generate valid labels, so the score is zero. Full fine-tuning is reported as a comparison baseline. QQP through CoLA are GLUE tasks; MultiRC through BoolQ are SuperGLUE tasks.

| Method | Dropout | QQP | QNLI | MNLI | SST-2 | STS-B | MRPC | CoLA | MultiRC | RTE | CB | COPA | WiC | BoolQ | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Simple PT | ✓ | 58.2/65.5 | 50.6 | 33.2 | 79.4 | 9.8/7.9 | 81.2/68.4 | 0.0 | 17.3/3.0 | 52.3 | 0.0/0.0 | 0.0 | 50.6 | 62.2 | 37.1 |
| Simple PT | ✗ | 70.8/75.3 | 72.8 | 50.7 | 84.9 | 0.0/0.0 | 82.5/71.3 | 0.0 | 22.6/6.0 | 49.1 | 0.0/0.0 | 0.0 | 57.4 | 62.6 | 41.5 |
| ATTEMPT | ✓ | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| ATTEMPT | ✗ | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| Residual PT | ✓ | 70.6/74.9 | 61.8 | 34.6 | 82.8 | 69.7/72.4 | 81.9/71.1 | 0.5 | 59.9/0.8 | 52.7 | 49.6/71.4 | 56.0 | 52.4 | 62.3 | 54.9 |
| Residual PT | ✗ | 73.3/78.2 | 79.2 | 60.7 | 85.1 | 80.8/80.6 | 88.3/83.3 | 20.6 | 59.8/4.4 | 59.6 | 68.6/73.2 | 56.0 | 58.2 | 64.7 | 63.8 |
| SuperPos PT | ✓ | 74.4/79.9 | 82.9 | 66.7 | 88.8 | 82.9/82.8 | 88.4/82.6 | 23.4 | 59.9/0.8 | 58.5 | 39.6/60.7 | 56.0 | 58.6 | 62.4 | 63.3 |
| SuperPos PT | ✗ | 79.1/83.3 | 85.3 | 71.7 | 89.8 | 84.0/84.0 | 89.9/85.8 | 38.9 | 66.6/16.7 | 64.6 | 73.6/76.8 | 58.0 | 65.7 | 68.9 | 70.2 |
| Full Fine-tuning | ✓ | 87.4/90.5 | 89.5 | 82.9 | 92.1 | 85.8/85.5 | 89.6/84.8 | 42.0 | 68.5/19.3 | 66.1 | 47.9/69.6 | 57.0 | 66.5 | 71.1 | 71.7 |

**ATTEMPT (Asai et al., 2022):** \( P_{\text{target}} \) prompts were initialized by sampling ten unique token embeddings from the embedding layer. To avoid leakage between training and testing data, we excluded the QQP, QNLI, MNLI, and SST-2 datasets from the evaluation, as pretrained prompts for these tasks are used during the training of new prompts. To align with the hyperparameters from the original ATTEMPT paper, the learning rate is set to 0.3, with a weight decay of 0.00001, and a bottleneck size of \( G \) set to 100.

**SuperPos Prompt Tuning:** Prompts in superposition were initialized with 128 unique token embeddings, shared across all 10 prompt tokens. The learning rate was 0.01 with a weight decay of 0.00001.

**Full Fine-tuning:** We opted for a lower learning rate of 0.00001 to preserve the original weights more effectively.

5 RESULTS

Our experimental results are compiled in Table 1. Runs generating invalid labels, a possible consequence of conditional generation, are denoted with † and scored as 0. Standard metrics from the GLUE and SuperGLUE benchmarks are used for each task.

Impact of Dropout: As shown in Figure 2.(a) and Table 1, eliminating dropout from the frozen model not only enhanced the performance of the model but also accelerated convergence. This trend was also evident in experiments with the Residual Prompt, ATTEMPT, and SUPERPOS-PROMPT tuning methods.
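The "without dropout" variants in Table 1 can be reproduced, in spirit, by switching off every dropout module of the frozen backbone before tuning the prompts. The snippet below is a sketch under our own assumptions; the model identifier and the zero-probability approach are illustrative, and setting `dropout_rate` in the model config is an equivalent route.

```python
import torch
from transformers import T5ForConditionalGeneration

# Frozen backbone: only the soft prompt parameters are tuned.
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
for param in model.parameters():
    param.requires_grad_(False)

# "Without dropout" variant: zero out every dropout probability in the frozen network.
for module in model.modules():
    if isinstance(module, torch.nn.Dropout):
        module.p = 0.0
```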
We hypothesize that dropout, being a form of regularization to prevent overfitting, may excessively constrain prompt tuning. Since tuning only 10 prompts inherently limits flexibility, additional dropout may lead to underperformance.

SuperPos-Prompt Performance: According to Table 1, SUPERPOS-PROMPT excelled over Residual Prompt tuning, showing a significant average score increase of +6.4 in T5v1.1-Small and +5.0 in T5v1.1-Base. Our method has superior performance on most of the tasks on which ATTEMPT was tested. In some cases, it even surpassed full fine-tuning methods. A more detailed comparison of learning curves for selected tasks, based on the T5v1.1 Base LM-Adapted experiments, is available in Figure 2.(b). Among the compared methods, SUPERPOS-PROMPT generally achieved better performance and faster convergence. All learning curves correspond to the without-dropout variant of each method, as this variant reached its best performance most of the time, as detailed in Table 1.

Figure 2: This figure illustrates results from our experiment using ‘T5v1.1 Base LM-Adapted’ as the foundation. (a) Learning curves comparing dropout effects on SuperPos-Prompt for selected tasks. (b) Learning curves comparing various prompt tuning methods across selected tasks, conducted without dropout. (c) Ablation study on the effect of sampled token count ($m$) for SuperPos-Prompt, with the x-axis representing sample token count and the y-axis indicating peak performance for the relevant metric. (d) Analysis of cosine similarity in superposition weights for each prompt token across all tasks.

Table 2: Mean and standard deviation of standardized overall scoring across thirteen different tasks. This table facilitates a comparison of method stability, where a lower standard deviation indicates higher stability across tasks. Note: ATTEMPT results are excluded as it was not evaluated on four of the thirteen tasks.

| Method | Dropout | T5v1.1 Small LM-Adapted | T5v1.1 Base LM-Adapted |
|-----------------|---------|-------------------------|------------------------|
| Simple PT | ✓ | 17.1±26.4 | 17.2±25.2 |
| Simple PT | ✗ | 28.9±29.5 | 30.8±32.6 |
| Residual PT | ✓ | 44.7±31.3 | 49.5±32.8 |
| Residual PT | ✗ | 65.9±20.0 | 83.2±10.2 |
| SuperPos PT | ✓ | 66.9±17.8 | 75.9±18.5 |
| SuperPos PT | ✗ | 81.7±9.7 | 93.6±4.7 |
| Full Fine-tuning| ✓ | 85.2±9.0 | 97.4±5.7 |

Other Prompt Tuning Methods' Performance: The performance of Residual Prompt and ATTEMPT did not meet the levels reported in their respective papers. This discrepancy may stem from their use of T5 checkpoints trained specifically on these tasks. Unable to replicate their results, we tested our method using the identical checkpoint and found it surpassed their reported numbers. For more details, see §A.1.

Stability Analysis: To compare the stability of various methods, we normalized and scaled the performance of each task across these methods. This process, referred to as “standardized overall scoring”, is described by Yu et al. (2023) and is employed in evaluating Large Language Models (LLMs). To determine stability, we calculated the mean and standard deviation of these scores for each method over the thirteen tasks. A method demonstrating a lower standard deviation suggests greater stability, indicating consistent performance across various tasks. As shown in Table 2, our method has a standard deviation half that of RESIDUAL PROMPT, thus exhibiting superior stability in prompt tuning tasks, closely rivaling the stability of full fine-tuning.
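The standardized overall scoring behind Table 2 can be sketched as follows. We assume a per-task min-max rescaling to [0, 100] across methods; the exact scaling used by Yu et al. (2023) may differ in detail, and the function name is our own.

```python
import numpy as np

def standardized_scores(task_scores: np.ndarray) -> np.ndarray:
    """task_scores: (num_methods, num_tasks) raw metric values.
    Rescale each task column to [0, 100] across methods, so every task
    contributes on a comparable scale."""
    lo = task_scores.min(axis=0, keepdims=True)
    hi = task_scores.max(axis=0, keepdims=True)
    return 100.0 * (task_scores - lo) / np.maximum(hi - lo, 1e-12)

# Per-method summary as reported in Table 2 (mean +/- std over tasks):
# scaled = standardized_scores(raw)           # raw: methods x tasks
# means, stds = scaled.mean(axis=1), scaled.std(axis=1)
```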
Analysis on Learned SuperPos-Prompt: We performed a cosine similarity analysis on the learned superposition weights ($p_i'$) for each prompt across different tasks. The resulting similarity matrices are presented in Figure 2.(d). Each prompt’s token similarity matrix reveals distinct patterns, suggesting unique task-specific encodings. However, we found no clear correlation between these patterns and the task descriptions. Notably, tasks with limited data and fewer training steps, such as CB, COPA, and RTE, tend to have the most distinctive prompts. 6 CONCLUSIONS In this work, we made two primary contributions that enhance the field of prompt tuning for language models, especially when fine-tuning datasets are small and existing soft prompt tuning approaches fall short. First, we observed a notable improvement in the efficiency and speed of convergence in prompt tuning upon excluding dropout from the frozen network. This observation, which has not been explored in existing literature, holds consistently across most scenarios, enhancing the performance of RESIDUAL PROMPT, ATTEMPT, and SUPERPOS-PROMPT tuning methods. Our findings underscore the importance of continually reassessing established network parameters and practices to unearth potential enhancements. Our second key contribution was the introduction of SUPERPOS-PROMPT, a novel reparameterization technique for soft prompt tuning. This method, leveraging the superpositions of sampled pretrained token embeddings, enhances stability in prompt tuning and obviates the need for pretrained source prompts. SUPERPOS-PROMPT consistently outperformed Residual Prompt tuning, showcasing an average score increase of +6.4 in T5-Small and +5.0 in T5-Base across all thirteen GLUE and SuperGLUE benchmarks used in this study. Remarkably, SUPERPOS-PROMPT not only exceeded the performance of Residual Prompt tuning but also, in certain instances, showed superior performance to the full fine-tuning approach. Additionally, we observed a clear correlation between the number of sampled tokens on SUPERPOS-PROMPT and performance scores, with an optimal plateau at 128 tokens. Looking forward, the exploration of integrating pre-trained source prompts stands as a promising avenue for further enhancing model performances. We anticipate that our work will spur innovative and more efficient uses of pre-trained source prompts in the future, reinforcing the importance of this research in the ever-evolving field of language model tuning and optimization. Future work includes a more extensive comparison of SUPERPOS-PROMPT with a broader range of prompting techniques in different dataset scenarios, an endeavor constrained in this study by computational resource limitations. Additionally, while this study exclusively explored language models, we anticipate the extension of this approach to additional foundation models across various modalities, as well as multimodal foundation models. REFERENCES Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Maitha Alhammadi, Mazzotta Daniele, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Nouné, Baptiste Pannier, and Guilherme Penedo. The falcon series of language models: Towards open frontier models. 2023. Akari Asai, Mohammadreza Salehi, Matthew Peters, and Hannaneh Hajishirzi. ATTEMPT: Parameter-efficient multi-task tuning via attentional mixtures of soft prompts. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 
6655–6672, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.446. URL https://aclanthology.org/2022.emnlp-main.446 Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Steven Bethard, Marine Carpuat, Marianna Apidianaki, Saif M. Mohammad, Daniel Cer, and David Jurgens (eds.), Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 1–14, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. URL https://aclanthology.org/S17-2001 Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, and Diyi Yang. Parameter-efficient fine-tuning design spaces. arXiv preprint arXiv:2301.01821, 2023. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300 DataCanary, hilfalkaff, Lili Jiang, Meg Risdal, Nikhil Dandekar, and tomtung. Quora question pairs, 2017. URL https://kaggle.com/competitions/quora-question-pairs Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423 Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. arXiv preprint arXiv:2203.06904, 2022. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models. *Nature Machine Intelligence*, 5(3):220–235, 2023. William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop on Paraphrasing (IWP2005)*, 2005. URL https://aclanthology.org/I05-5002 Kasthurirangan Gopalakrishnan, Siddhartha K Khaitan, Alok Choudhary, and Ankit Agrawal. Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection. *Construction and building materials*, 157:322–330, 2017. Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)*, pp. 394–398, Montréal, Canada, 7-8 June 2012. 
Association for Computational Linguistics. URL https://aclanthology.org/S12-1052 Demi Guo, Alexander Rush, and Yoon Kim. Parameter-efficient transfer learning with diff pruning. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 4884–4896, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.378. URL https://aclanthology.org/2021.acl-long.378 Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pp. 2790–2799. PMLR, 2019. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=nZeVKeFyf9 Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: a challenge set for reading comprehension over multiple sentences. In *Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)*, 2018. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 3045–3059, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.243. URL https://aclanthology.org/2021.emnlp-main.243 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7871–7880, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.org/2020.acl-main.703 Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 4582–4597, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.353. URL https://aclanthology.org/2021.acl-long.353 Vladislav Lialin, Vijeta Deshpande, and Anna Rumshisky. Scaling down to scale up: A guide to parameter-efficient fine-tuning. *arXiv preprint arXiv:2303.15647*, 2023.
BSePKWwTUj
I fail to understand how the new regret formulation is different from the previous regret formulation with the indicator. If the indicator function is false, then that would inherently increase the regret of the objective for a different $i$.
MULTIOBJECTIVE STOCHASTIC LINEAR BANDITS UNDER LEXICOGRAPHIC ORDERING Anonymous authors Paper under double-blind review ABSTRACT This paper studies the multiobjective stochastic linear bandit (MOSLB) model under lexicographic ordering, where the agent aims to simultaneously maximize \( m \) objectives in a hierarchical manner. This model has various real-world scenarios, including water resource planning and radiation treatment for cancer patients. However, there is no effort on the general MOSLB model except a special case called multiobjective multi-armed bandits. Previous literature provided a suboptimal algorithm for this special case, which enjoys a regret bound of \( \tilde{O}(T^{2/3}) \) under a priority-based regret measure. In this paper, we propose an algorithm achieving the almost optimal regret bound \( \tilde{O}(d\sqrt{T}) \) for the MOSLB model, and its metric is the general regret. Here, \( d \) is the dimension of arm vector and \( T \) is the time horizon. The major novelties of our algorithm include a new arm filter and a multiple trade-off approach for exploration and exploitation. Experiments confirm the merits of our algorithms and provide compelling evidence to support our analysis. 1 INTRODUCTION Sequential decision-making under uncertainty arises in numerous real-world scenarios, such as medical trials [Robbins, 1952], recommendation systems [Bubeck & Cesa-Bianchi, 2012], and autonomous driving [Huang et al., 2019]. This has motivated the development of the stochastic multi-armed bandit (MAB) model, where the agent repeatedly selects an arm from \( K \) arms and receives a single-valued reward sampled from a fixed but unknown distribution specific to the selected arm [Agrawal, 1995; Li et al., 2010a; Xue et al., 2020; Ghosh & Sankararaman, 2022]. The agent aims to minimize the regret, which is the cumulative difference between the expected reward of the selected arm and that of the best arm. Furthermore, the aforementioned scenarios can be better modeled if multiple objectives are considered. An example is an online advertising system where the agent not only needs to maximize the click-through rate but also the click-conversion rate [Rodriguez et al., 2012]. Therefore, a natural extension of MAB is replacing the single-valued reward with a vector, known as multiobjective multi-armed bandits (MOMAB) [Drugan & Nowe, 2013]. A general framework of MOMAB is a \( T \)-round sequential decision-making system [Drugan & Nowe, 2013], where the agent chooses an arm \( a_t \) from the given arm set \( \{1, 2, \ldots, K\} \) at the \( t \)-th round and receives a reward vector \( [y^1(a_t), y^2(a_t), \ldots, y^m(a_t)] \in \mathbb{R}^m \) whose \( i \)-th element is a random variable with expectation \( \mathbb{E}[y^i(a_t)] = \mu^i(a_t), i \in \{1, 2, \ldots, m\} \). Most of the existing work evaluates the performance of the agent by Pareto regret [Van Moffaert et al., 2014; Turgay et al., 2018; Lu et al., 2019], which regards all objectives as equivalent and minimizing the regret of any objective can guarantee a sublinear Pareto regret bound [Xu & Klabjan, 2023, Theorem 4.1]. Therefore, if the evaluation criterion is Pareto regret, the agent can select any of the \( m \) objectives to optimize and ignore other objectives, which is unreasonable. To deal with this inherent drawback, the lexicographic order is adopted to distinguish the importance among different objectives [Ehrgott, 2005]. 
In this setting, the priority over \( m \) objectives is given by indices, such that the \( i \)-th objective has a higher priority than the \( j \)-th objective if \( i < j \). For the bandit model, given two arms \( a \) and \( a' \) with expected rewards \( \mu(a) = [\mu^1(a), \mu^2(a), \ldots, \mu^m(a)] \) and \( \mu(a') = [\mu^1(a'), \mu^2(a'), \ldots, \mu^m(a')] \), arm \( a \) is said to lexically dominate arm \( a' \), denoted by \( a >_{lex} a' \), if and only if \( \mu^1(a) > \mu^1(a') \) or there exists some \( i^* \in \{2, \ldots, m\} \), such that \( \mu^i(a) = \mu^i(a') \) for \( 1 \leq i \leq i^* - 1 \) and \( \mu^{i^*}(a) > \mu^{i^*}(a') \). An arm \( a_* \) is said to be lexicographic optimal if and only if any other arm does not lexicographically dominate it. Hüyük & Tekin (2021) was the first to explore the MOMAB model under lexicographic ordering and proposed a priority-based regret, $$\hat{R}^i(T) = \sum_{t=1}^{T} (\mu^i(a_*) - \mu^i(a_t)) \mathbb{I}(\mu^j(a_*) = \mu^j(a_t), 1 \leq j < i)$$ where $a_*$ denotes the lexicographic optimal arm and $\mathbb{I}(\cdot)$ is the indicator function. Utilizing this regret, Hüyük & Tekin (2021) developed an algorithm with a regret bound $\tilde{O}((KT)^{2/3})$, which is sub-optimal since the optimal regret bound for existing single objective MAB algorithms is $O(K \log T)$ (Lai & Robbins, 1985). On the other hand, the MOMAB model neglects the contextual information in real-world applications, such as user preferences and news features in news recommendation systems, which could be employed to guide the decision-making process (Li et al., 2010b). To incorporate contextual information into the decision-making process, a natural approach is to utilize the stochastic linear bandit (SLB) model. The SLB model has been widely researched in the single objective bandit field (Auer, 2002; Dani et al., 2008; Chu et al., 2011; Abbasi-yadkori et al., 2011; Alieva et al., 2021; Zhu & Mineiro, 2022; He et al., 2022; Yang et al., 2022), and here we extend it to multiobjective setting by formalizing the multiobjective stochastic linear bandit (MOSLB) model. In MOSLB, the agent selects an arm $x_t$ from the given arm set $D_t \subset \mathbb{R}^d$ at the $t$-th round and then receives a stochastic reward vector $[y^1_t, y^2_t, \ldots, y^m_t] \in \mathbb{R}^m$ satisfying $$E[y^i_t | x_t, F_{t-1}] = \langle \theta^*_i, x_t \rangle, i = 1, 2, \ldots, m$$ where $y^i_t$ represents the reward of the $i$-th objective, $\theta^*_i$ denotes the unknown parameters for the $i$-th objective, and $F_{t-1} = \{x_1, x_2, \ldots, x_{t-1}\} \cup \{y^1_1, y^2_1, \ldots, y^m_1\} \cup \ldots \cup \{y^1_{t-1}, y^2_{t-1}, \ldots, y^m_{t-1}\}$ constitutes a $\sigma$-filtration of events up to $t$. Meanwhile, a common assumption on the bandit problem is that the stochastic rewards are sub-Gaussian with a fixed parameter $R \geq 0$, that is, for any $\beta \in \mathbb{R}$, $$E[e^{\beta y^i_t} | x_t, F_{t-1}] \leq \exp\left(\frac{\beta^2 R^2}{2}\right), i = 1, 2, \ldots, m.$$ To evaluate the performance of the agent, we adopt the general regret for single objective SLB (Auer, 2002), such that $$R^i(T) = \sum_{t=1}^{T} \langle \theta^*_i, x^*_t - x_t \rangle, i = 1, 2, \ldots, m$$ where $x^*_t$ indicates the lexicographic optimal arm in $D_t$. Clearly, $R^i(T)$ is more stringent than $\hat{R}^i(T)$ because $\hat{R}^i(T)$ disregards the regret of $t$-th round when the indicator function is false, whereas $R^i(T)$ accumulates all instantaneous regret. 
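To make these definitions concrete, the sketch below implements lexicographic domination and the per-round term of the priority-based regret (1). The function names and the numerical tolerance `tol` are illustrative additions (the definitions above use exact equality of expected rewards); the general regret (4) simply accumulates $\langle \theta_*^i, x_t^* - x_t \rangle$ in every round, with no indicator.

```python
import numpy as np

def lex_dominates(mu_a: np.ndarray, mu_b: np.ndarray, tol: float = 1e-9) -> bool:
    """True if arm a lexicographically dominates arm b: the first objective on
    which their expected rewards differ (beyond tol) favours a."""
    for ra, rb in zip(mu_a, mu_b):
        if ra > rb + tol:
            return True
        if rb > ra + tol:
            return False
    return False  # identical reward vectors: no domination


def priority_regret_term(mu_opt: np.ndarray, mu_played: np.ndarray, i: int, tol: float = 1e-9) -> float:
    """Round-t term of the priority-based regret (1) for objective i (0-indexed):
    counted only when the played arm matches the optimum on objectives 0..i-1."""
    if np.all(np.abs(mu_opt[:i] - mu_played[:i]) <= tol):
        return float(mu_opt[i] - mu_played[i])
    return 0.0
```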
Existing optimal algorithms for single objective SLB exhibit the regret bound $\tilde{O}(d\sqrt{T})$ (Dani et al., 2008; Abbasi-yadkori et al., 2011). Therefore, a compelling and non-trivial challenge is to achieve the regret bound $O(d\sqrt{T})$ for the MOSLB under lexicographic ordering. In line with the standard SLB model (Dani et al., 2008), the sequence of decision sets $\{D_1, D_2, \ldots, D_T\}$ are compact and determined before the game starts. Thus, we claim that there exists some $\lambda \geq 0$, the expected rewards of different objectives satisfy $$\langle \theta^*_i, x - x^*_t \rangle \leq \lambda \cdot \max_{j \in [i-1]} \langle \theta^*_j, x^*_t - x \rangle, i = 2, 3, \ldots, m$$ for any $x \in D_t, t \in [T]$. Appendix A shows our claim is true. We want to emphasize two important properties of the proposed parameter $\lambda$. Firstly, measuring the relative rate at which different objective values change with respect to the decision is sufficient to provide an upper bound for $\lambda$. To illustrate this point, we provide a simple example involving two objectives and a fixed arm set $D$. For any $x, x' \in D$, if $|\langle \theta^*_1, x - x' \rangle| \geq L_1$, can guarantee $|\langle \theta^*_2, x - x' \rangle| \leq L_2$, then we have $\lambda \leq L_2/L_1$. $L_2/L_1$ is feasible as different objectives are related to each other in various applications, such as water resource planning (Weber et al., 2002) and radiation treatment for cancer patients (Lee et al., 2007). Secondly, $\lambda$ captures the complexity of identifying the optimal arm $x^*_t$ within $D_t$. Specifically, if $\lambda$ is exceptionally large, there exists $x \in D_t$ that yields substantially larger rewards than the optimal arm $x^*_t$ for the $i$-th objective, while maintaining similar rewards for the preceding $i-1$ objectives, making the identification of the optimal arm challenging. For a positive integer $i$, $[i]$ denotes the set $\{1, 2, \ldots, i\}$. To the best of our knowledge, this paper is the first attempt to investigate the MOSLB model under lexicographic ordering. With the prior knowledge $\lambda$, we develop an algorithm that attains a general regret bound of $\tilde{O}((\lambda^{i-1} + 1)d\sqrt{T})$ for the $i$-th objective, $i \in [m]$. This bound is almost optimal in terms of $d$ and $T$, as the lower bound for the single objective SLB problem is $\Omega(d\sqrt{T})$ (Dani et al., 2008). Our algorithm improves upon the previous bound $\tilde{O}((KT)^{2/3})$ in the most recent study of Huyük & Tekin (2021), which focused on the MOMAB model. Furthermore, we extend the metric of the lexicographically ordered multiobjective bandit problem from the priority-based regret (1) to the general regret (4), which more accurately evaluates the performance of algorithms. The main innovations of our algorithm include a new arm filter and a multiple trade-off approach for exploration and exploitation, which can be easily adapted to other bandit models, such as generalized linear bandits (Jin et al., 2017) and Lipschitz bandits (Bubeck et al., 2011). 2 RELATED WORK In this section, we provide a literature review on stochastic bandits and multiobjective bandits. Throughout the paper, $\|x\|$ is the $\ell_2$-norm of vector $x \in \mathbb{R}^d$. Additionally, the induced norm of $x$ by a positive definite matrix $V \in \mathbb{R}^{d \times d}$ is denoted as $\|x\|_V = \sqrt{x^\top V x}$. 
2.1 STOCHASTIC BANDITS

The seminal work of Lai & Robbins (1985) not only introduced a stochastic MAB algorithm with a regret bound of $O(K \log T)$ but also established a matching lower bound. Auer (2002) extended the bandit algorithm to the linear model with finite arms and developed the SupLinRel algorithm, which employs a sophisticated device to decouple reward dependence, yielding a regret bound of $\tilde{O}(\sqrt{dT})$. In the context of infinite-armed stochastic linear bandits, Dani et al. (2008) first applied the confidence region technique to deduce the upper confidence bound for the expected rewards of infinite arms, resulting in a regret bound of $\tilde{O}(d\sqrt{T})$ that matches the given lower bound $\Omega(d\sqrt{T})$. A subsequent study by Abbasi-yadkori et al. (2011) offered a new analysis for the algorithm of Dani et al. (2008) and enhanced the regret bound by a logarithmic factor. The Upper Confidence Bound (UCB) framework is a widely-used technique for balancing exploration and exploitation in the decision-making process, which first computes the confidence bound of forthcoming rewards through historical trials and then selects the arm with the highest upper confidence bound (Auer et al., 2002; Abbasi-yadkori et al., 2011; Bubeck et al., 2015; Hu et al., 2021; Li et al., 2022; Masoudian et al., 2022; Feng et al., 2022; Jin et al., 2022). To illustrate the UCB technique utilized in the SLB model, we take the classical algorithm OFUL as an example (Abbasi-yadkori et al., 2011). With trials up to the $t$-th round, OFUL minimizes the square loss of the action-reward pairs $\{(x_1, y_1), (x_2, y_2), \ldots, (x_{t-1}, y_{t-1})\}$ to estimate the inherent parameters $\theta^*$, such that, $$\hat{\theta}_t = \arg\min_{\theta \in \mathbb{R}^d} \|X_t \theta - Y_t\|^2 + \|\theta\|^2$$ (6) where $X_t = [x_1, x_2, \ldots, x_{t-1}] \in \mathbb{R}^{(t-1) \times d}$ is the matrix composed of selected arm vectors, and $Y_t = [y_1, y_2, \ldots, y_{t-1}] \in \mathbb{R}^{(t-1) \times 1}$ is the vector composed of historical rewards. Using the estimator $\hat{\theta}_t$, OFUL constructs a confidence region $C_t$ in which the inherent parameter lies with high probability, such that $$C_t = \{\theta \mid \|\theta - \hat{\theta}_t\|_{V_t} \leq \alpha_t\}$$ (7) where $\alpha_t = O(\sqrt{d \log(t)})$ and $V_t = X_t^\top X_t + I_d$. Finally, OFUL selects the most promising arm $x_t$ through bilinear optimization, $$(x_t, \tilde{\theta}_t) = \arg\max_{x \in D_t, \theta \in C_t} \langle x, \theta \rangle.$$ (8) Considering that the confidence region $C_t$ is an ellipse, a simple application of the Lagrange method shows that the upper confidence bound for the arm $x \in D_t$ is $$u_t(x) = \langle \hat{\theta}_t, x \rangle + \alpha_t \|x\|_{V_t^{-1}},$$ (9) where $\langle \hat{\theta}_t, x \rangle$ is an unbiased estimation of $\langle \theta^*, x \rangle$, and $\alpha_t \|x\|_{V_t^{-1}}$ is the width of the confidence interval, indicating the uncertainty of $\langle \hat{\theta}_t, x \rangle$ (Zhang et al., 2016; Boyd & Vandenberghe, 2004).

2.2 Multiobjective Bandits

The MOMAB problem was initially investigated by Drugan & Nowe (2013), who proposed two UCB-based algorithms that achieve regret bounds of $O(K \log T)$ under the Pareto regret metric and scalarized regret metric, respectively. The Pareto regret measures the cumulative distance between the obtained reward vectors and the Pareto optimal rewards, while the scalarized regret is the weighted regret of all objectives (Drugan & Nowe, 2013).
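To make the single-objective selection rule (6)-(9) from Section 2.1 concrete, here is a minimal sketch of one UCB round in the linear setting. The function name is illustrative, and $\alpha$ is passed in as a fixed constant rather than the $O(\sqrt{d \log t})$ schedule used by OFUL.

```python
import numpy as np

def ucb_arm_choice(arms: np.ndarray, X: np.ndarray, y: np.ndarray, alpha: float) -> int:
    """arms: (K, d) candidate arm vectors; X: (t-1, d) played arms; y: (t-1,) rewards.
    Pick the arm maximizing <theta_hat, x> + alpha * ||x||_{V^{-1}}."""
    d = arms.shape[1]
    V = np.eye(d) + X.T @ X                      # V_t = X_t^T X_t + I_d
    theta_hat = np.linalg.solve(V, X.T @ y)      # ridge least-squares estimate (6)
    V_inv = np.linalg.inv(V)
    widths = np.sqrt(np.einsum("ij,jk,ik->i", arms, V_inv, arms))   # ||x||_{V^{-1}}
    ucb = arms @ theta_hat + alpha * widths      # upper confidence bounds (9)
    return int(np.argmax(ucb))
```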
To leverage environmental side information, Turgay et al. (2018) examined the multiobjective contextual bandit model, where the expected reward satisfies the Lipschitz condition with respect to contextual vectors. Lu et al. (2019) developed an algorithm with a Pareto regret bound of \(O(d\sqrt{T})\) for the multiobjective generalized linear bandit model. Another research direction focuses on designing algorithms from the perspective of best arm identification, with the primary goal of identifying Pareto optimal arms within a limited budget (Van Moffaert et al., 2014; Auer et al., 2016). Huyuk & Tekin (2021) is the only study for the multiobjective bandit problem under lexicographic ordering. They presented the PF-LEX algorithm for the MOMAB model, whose regret bound is \(O((KT)^{2/3})\) based on the priority-based regret metric (1). However, this result is inferior to existing single objective MAB algorithms, which attain a regret bound of \(O(K \log T)\) (Lai & Robbins, 1985). The intuitive idea to settle the lexicographically ordered issue for the multiobjective bandit model is to sequentially filter the arms according to the priority among objectives (Ehrgott, 2005; Huyuk & Tekin, 2021). To further illustrate this idea, we introduce the PF-LEX algorithm (Huyuk & Tekin, 2021). At each round \(t\), PF-LEX first calculates confidence intervals for expected rewards through the historical trials. Specifically, the estimated reward of arm \(a \in [K]\) in the \(i\)-th objective is given by \(\hat{\mu}_t^i(a) = \sum_{\tau=1}^{t-1} y_t^i(a_\tau)I(a_\tau = a)/N_t(a)\), where \(a_\tau\) represents the arm played at round \(\tau\) and \(N_t(a)\) denotes the number of times arm \(a\) has been played up to round \(t\). Thus, the \(i\)-th confidence intervals for arm \(a \in [K]\) is \[ [\hat{\mu}_t^i(a) - w_t(a), \hat{\mu}_t^i(a) + w_t(a)] \] where \(w_t(a) = \beta_t \sqrt{(1 + N_t(a))/N_t^2(a)}\) and \(\beta_t = O(\sqrt{\log(Kmt)})\). Subsequently, PF-LEX either chooses the arm with a wide confidence interval to explore potentially better arms or selects the arm that is almost optimal in all objectives. Precisely, if some arm \(a_t \in [K]\) satisfies \(w_t(a_t) > \epsilon\) for a given criteria \(\epsilon > 0\), PF-LEX chooses arm \(a_t\). On the other hand, if \(w_t(a) < \epsilon\) for all arms \(a \in [K]\), PF-LEX filters the promising arms through the chain relation. Starting from \(A_t^0 = [K]\), PF-LEX operates as follows, \[ \hat{a}_t^i = \arg\max_{a \in A_t^{i-1}} u_t^i(a), A_t^i = \{a \in A_t^{i-1} | aC_i \hat{a}_t^i\}, i \in [m]. \] Here, \(u_t^i(a) = \hat{\mu}_t^i(a) + w_t(a)\) and \(aC_i \hat{a}_t^i\) denotes that arm \(a\) and \(\hat{a}_t^i\) are chained in the \(i\)-th objective, such that there exists a sequence of arms \(\{a_1, b_1, b_2, \ldots, b_n, \hat{a}_t^i\} \subseteq [K]\), the \(i\)-th confidence intervals of adjacent arms are intersected. Finally, PF-LEX selects arm \(\hat{a}_t^m\). 3 Algorithms In this section, we first extend the MOMAB algorithm proposed in Huyuk & Tekin (2021) to the MOSLB model as a warm-up and then provide an improved algorithm that achieves the almost optimal regret. Without loss of generality, we assume the arm vectors and inherent parameters are restricted in the unit sphere, such that \(\|x\| \leq 1\) for any \(x \in D_t, t \in [T]\) and \(\|\theta_*\| \leq 1, i \in [m]\). 
3.1 Warm-up: STE\(^2\)LO

As a warm-up, we introduce the Single Trade-off between Exploration and Exploitation under Lexicographic Ordering (STE\(^2\)LO) algorithm, which is a simple extension of PF-LEX (Huyuk & Tekin, 2021). Given an input parameter \(\epsilon > 0\), STE\(^2\)LO divides the decision-making operation at each round into two cases: a pure exploration case and an exploration-exploitation trade-off case. We give a formal definition of the chain relation to facilitate our presentation. Given any arms \(z_1, z_n \in D_t\), we say that \(z_1\) and \(z_n\) are chained in the \(i\)-th objective, denoted by \(z_1 C_i z_n\), if and only if there exists a sequence of arms \(\{z_1, z_2, \ldots, z_n\} \subseteq D_t\) satisfying the condition that the confidence intervals of adjacent arms in the \(i\)-th objective are intersected, i.e., $[\ell_t^i(z_j), u_t^i(z_j)] \cap [\ell_t^i(z_{j+1}), u_t^i(z_{j+1})] \neq \emptyset, \forall j \in [n - 1]$. Here, $\ell_t^i(\cdot)$ and $u_t^i(\cdot)$ denote the lower and upper confidence bounds for the $i$-th objective at round $t$, respectively.

Algorithm 1 Single Trade-off between Exploration and Exploitation under Lexicographic Ordering (STE\textsuperscript{2}LO)
Input: time horizon $T \in \mathbb{N}$, confidence parameter $\delta \in (0, 1)$, exploration criterion $\epsilon > 0$
1: Initialize $V_1 = I_d$ and $\hat{\theta}_1^i = 0$, $i \in [m]$.
2: for $t = 1, 2, \ldots, T$ do
3: Compute the estimated rewards and width of confidence intervals for any arm $x \in D_t$: $$\hat{y}_t^i(x) = \langle \hat{\theta}_t^i, x \rangle, \forall i \in [m], \quad w_t(x) = \gamma_t \|x\|_{V_t^{-1}}$$ where $\gamma_t = R\sqrt{d \ln(m(1 + t)/\delta)} + 1$
4: if $w_t(x_t) > \epsilon$ for some $x_t \in D_t$ then
5: Play the arm $x_t$ and observe $[y_t^1, y_t^2, \ldots, y_t^m]$
6: else ($w_t(x) \leq \epsilon$ for all $x \in D_t$)
7: Initialize $D_t^0 = D_t$
8: for $i = 1, 2, \ldots, m$ do
9: $\hat{x}_t^i = \arg \max_{x \in D_t^{i-1}} \hat{y}_t^i(x) + w_t(x), \quad D_t^i = \{x \in D_t^{i-1} \mid x C_i \hat{x}_t^i\}$
10: end for
11: Play the arm $x_t = \hat{x}_t^m$ and observe $[y_t^1, y_t^2, \ldots, y_t^m]$
12: end if
13: Update $V_{t+1} = V_t + x_t x_t^\top$, $X_{t+1} = [x_\tau]_{\tau \in [t]}$ and $Y_{t+1}^i = [y_\tau^i]_{\tau \in [t]}$, $i \in [m]$
14: Update the estimators $\hat{\theta}_{t+1}^i = V_{t+1}^{-1} X_{t+1}^\top Y_{t+1}^i$, $i \in [m]$
15: end for

At round $t$, considering that the agent receives $m$ values per round, STE\(^2\)LO performs least squares estimation on each value sequence to estimate the unknown parameters $\{\theta^*_1, \theta^*_2, \ldots, \theta^*_m\}$, such that $$\hat{\theta}_t^i = \arg \min_{\theta \in \mathbb{R}^d} \|X_t \theta - Y_t^i\|^2 + \|\theta\|^2, \quad i \in [m]$$ where $X_t = [x_\tau]_{\tau \in [t-1]} \in \mathbb{R}^{(t-1) \times d}$ is the matrix of selected arms, and $Y_t^i = [y_\tau^i]_{\tau \in [t-1]} \in \mathbb{R}^{(t-1) \times 1}$ is the $i$-th historical rewards vector. Using a variant of the self-normalized bound for martingales (Abbasi-yadkori et al., 2011), the estimated rewards and the confidence interval width for any arm $x \in D_t$ can be calculated as $$\hat{y}_t^i(x) = \langle \hat{\theta}_t^i, x \rangle, \quad w_t(x) = \gamma_t \|x\|_{V_t^{-1}}, \quad i \in [m]$$ where $\gamma_t = R\sqrt{d \ln(m(1 + t)/\delta)} + 1$ and $V_t = I_d + X_t X_t^\top$.
For the arm $x \in D_t$, the confidence interval for the expected reward $\langle \theta_*^i, x \rangle$ is therefore
$$[\ell_t^i(x), u_t^i(x)] = [\hat{y}_t^i(x) - w_t(x), \hat{y}_t^i(x) + w_t(x)].$$
A wider confidence interval implies higher uncertainty in the estimated reward, so the corresponding arm should be pulled to obtain more information. Therefore, if there exists some $x_t \in D_t$ whose confidence interval is wider than the input parameter $\epsilon$, i.e., $w_t(x_t) > \epsilon$, STE\textsuperscript{2}LO plays the arm $x_t$ as pure exploration. In contrast, if all arms have narrow confidence intervals, i.e., $w_t(x) \leq \epsilon, \forall x \in D_t$, STE\textsuperscript{2}LO aims to play an arm with a high upper confidence bound in all objectives to balance exploration and exploitation. However, the arm with the highest upper confidence bound may differ across objectives, preventing simultaneous maximization of all objectives. Taking the importance of the objectives into account, STE\textsuperscript{2}LO filters the arms from the first objective to the last objective sequentially. More precisely, starting from $D_t^0 = D_t$, STE\textsuperscript{2}LO filters the arm set through the following mechanism,
$$\hat{x}_t^i = \arg \max_{x \in D_t^{i-1}} \hat{y}_t^i(x) + w_t(x), \quad D_t^i = \{x \in D_t^{i-1} \mid x C_i \hat{x}_t^i\}, \quad i \in [m]$$
where $\hat{x}_t^i$ is the arm with the highest upper confidence bound in the $i$-th objective, and $x C_i \hat{x}_t^i$ selects the arms chained with the arm $\hat{x}_t^i$. After filtering on the last objective, STE\textsuperscript{2}LO plays the arm $\hat{x}_t^m$ and observes the reward vector $[y_t^1, y_t^2, \ldots, y_t^m]$. Finally, for $i \in [m]$, STE\textsuperscript{2}LO updates the estimator from $\hat{\theta}_t^i$ to $\hat{\theta}_{t+1}^i$ with the updated contextual information matrix $X_{t+1}$ and historical rewards vector $Y_{t+1}^i$.

Algorithm 2 Lexicographically Ordered Arm Filter (LOAF)

Input: arm set \( D_t \), scalarized parameter \( \lambda \), maximum confidence interval width \( W \), upper confidence bounds \( u_t^i(x) \) for all \( x \in D_t \) and \( i \in [m] \)
1: Initialize the arm set \( D_t^0 = D_t \)
2: for \( i = 1, 2, \ldots, m \) do
3: \( \hat{x}_t^i = \arg\max_{x \in D_t^{i-1}} u_t^i(x) \)
4: \( D_t^i = \{ x \in D_t^{i-1} \mid u_t^i(x) \geq u_t^i(\hat{x}_t^i) - (2 + 4\lambda + \ldots + 4\lambda^{i-1})W \} \)
5: end for
6: Return the filtered arm set \( D_t^m \)

The following theorem establishes the theoretical guarantees of the STE\textsuperscript{2}LO algorithm.

Theorem 1 Suppose that (2) and (3) hold, and the arm sets are finite, i.e., \( |D_t| = K, \forall t \in [T] \). If STE\textsuperscript{2}LO is run with \( \delta \in (0, 1) \) and \( \epsilon > 0 \), then with probability at least \( 1 - \delta \), STE\textsuperscript{2}LO satisfies
\[
\hat{R}^i(T) \leq 100\epsilon^{-2}d \ln T \left( R^2 d \ln \left( m(1 + T)/\delta \right) + 1 \right) + 2KT\epsilon, \quad \forall i \in [m]
\]
where \( \hat{R}^i(T) = \sum_{t=1}^{T} \langle \theta_*^i, x_t^* - x_t \rangle \, \mathbb{I}\big(\langle \theta_*^j, x_t^* \rangle = \langle \theta_*^j, x_t \rangle,\ 1 \leq j \leq i-1\big) \) is the priority-based regret.

Remark: By setting the input parameter \( \epsilon = d^{2/3}(KT)^{-1/3} \), Theorem 1 implies that STE\textsuperscript{2}LO achieves an \( \tilde{O}((dKT)^{2/3}) \) bound without requiring any prior knowledge. This bound matches that of the existing algorithm PF-LEX in terms of \( K \) and \( T \) (Huyuk & Tekin, 2021).
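The choice of $\epsilon$ in the remark can be recovered by balancing the two terms of Theorem 1. Ignoring logarithmic factors, a rough sketch of the calculation is
$$\epsilon^{-2} d^{2} \;\approx\; K T \epsilon \;\Longrightarrow\; \epsilon \;\approx\; d^{2/3}(KT)^{-1/3} \;\Longrightarrow\; \hat{R}^i(T) \;=\; \tilde{O}\big(KT\epsilon\big) \;=\; \tilde{O}\big((dKT)^{2/3}\big).$$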
STE\(^2\)LO allows the arm set \( D_t \) to vary during the learning process, a distinguishing feature from PF-LEX, which lacks such flexibility. However, the algorithm has two limitations. First, STE\(^2\)LO is suboptimal, as the regret lower bound for the single objective SLB model is \( \Omega(d\sqrt{T}) \) (Dani et al., 2008). Second, the priority-based regret \( \hat{R}^i(T) \) relies on the indicator function \( \mathbb{I}(\cdot) \): at any round where \( \langle \theta_*^1, x_t \rangle < \langle \theta_*^1, x_t^* \rangle \), only the performance in the first objective is measured.

3.2 IMPROVED ALGORITHM: MTE\(^2\)LO

Although STE\(^2\)LO is straightforward, its regret bound is suboptimal even under the priority-based metric \( \hat{R}^i(T) \). In this section, we introduce an improved algorithm called MTE\(^2\)LO, which achieves an almost optimal bound on the general regret \( R^i(T) \).

To motivate the development of MTE\(^2\)LO, we first briefly explain why the simple algorithm STE\(^2\)LO is suboptimal. One limitation of STE\(^2\)LO is the use of the chain relation \( C_i \), which may cause the lexicographically optimal arm \( x_t^* \) to be dropped when passing from \( D_t^{i-1} \) to \( D_t^i \) for some \( i \in [m] \). To illustrate this issue, we present a simple example with two objectives in Fig. 1. In this example, there are three arms, where the red point \( x_t^* \) represents the lexicographically optimal one, and the squares depict the confidence intervals in the first and second objectives. Clearly, \( x_t^* = \hat{x}_t^1 \), and \( D_t^1 \) contains both \( x_t^* \) and \( x \) since their confidence intervals for the first objective intersect. However, \( D_t^2 \) loses \( x_t^* \) because \( \hat{x}_t^2 = x \) and \( x \) is not chained with \( x_t^* \) in the second objective.

To remove this limitation, we observe that, for a fixed arm, the confidence interval width is the same in every objective, and scaling the confidence interval of the second objective appropriately guarantees the intersection of confidence intervals. Motivated by this observation, we design a novel Lexicographically Ordered Arm Filter (LOAF), which filters promising arms without losing the optimal arm, as detailed in Algorithm 2. LOAF sequentially refines the promising arms from the first objective to the last objective by the upper confidence bounds shown in the following equation,
\[
u_t^i(x) = \hat{y}_t^i(x) + w_t(x), \quad i \in [m]
\]
(16)

Algorithm 3 Multiple Trade-off between Exploration and Exploitation under Lexicographic Ordering (MTE\textsuperscript{2}LO)

**Input:** time horizon $T \in \mathbb{N}$, confidence parameter $\delta \in (0, 1)$, scalarized parameter $\lambda$
1: Initialize $S = \lfloor \ln T \rfloor$, $V_1 = I_d$ and $\hat{\theta}_1^i = 0$, $i \in [m]$
2: for $t = 1, 2, \ldots, T$ do
3: Compute the estimated rewards and widths of confidence intervals for any arm $x \in D_t$:
$$\hat{y}_t^i(x) = \langle \hat{\theta}_t^i, x \rangle, \forall i \in [m], \quad w_t(x) = \gamma_t \|x\|_{V_t^{-1}}$$
where $\gamma_t = R \sqrt{d \ln(m(1 + t)/\delta)} + 1$
4: Initialize $s = 1$, $D_{t,1} = D_t$
5: repeat
6: if $w_t(x) \leq 1/\sqrt{T}$ for all $x \in D_{t,s}$ then
7: Invoke Algorithm 2 to filter the promising arms: $D_{t,s}^m = \text{LOAF}(D_{t,s}, \lambda, 1/\sqrt{T})$
8: Play the arm $x_t = \arg\max_{x \in D_{t,s}^m} \hat{y}_t^m(x) + w_t(x)$ and observe $[y_t^1, y_t^2, \ldots, y_t^m]$
9: else if $w_t(x_t) > 2^{-s}$ for some $x_t \in D_{t,s}$ then
10: Play the arm $x_t$ and observe $[y_t^1, y_t^2, \ldots, y_t^m]$
11: else ($w_t(x) \leq 2^{-s}$ for all $x \in D_{t,s}$)
12: Invoke Algorithm 2 to filter the promising arms: $D_{t,s+1} = \text{LOAF}(D_{t,s}, \lambda, 2^{-s})$
13: Update $s = s + 1$
14: end if
15: until an arm $x_t$ is played
16: Update $V_{t+1} = V_t + x_t x_t^\top$, $X_{t+1} = [x_\tau]_{\tau \in [t]}$ and $Y_{t+1}^i = [y_\tau^i]_{\tau \in [t]}$, $i \in [m]$
17: Update the estimators $\hat{\theta}_{t+1}^i = V_{t+1}^{-1} X_{t+1}^\top Y_{t+1}^i$, $i \in [m]$
18: end for

where $\hat{y}_t^i(x)$ and $w_t(x)$ are the estimated rewards and confidence interval width in (13). For the $i$-th objective, LOAF selects the most promising arm from the previous arm set $D_t^{i-1}$ as follows,
$$\hat{x}_t^i = \arg\max_{x \in D_t^{i-1}} u_t^i(x)$$
where the initial arm set is $D_t^0 = D_t$. Then, LOAF retains the arms that are not far away from the arm $\hat{x}_t^i$ in the $i$-th objective through the intersection of scalarized confidence intervals, i.e.,
$$D_t^i = \{ x \in D_t^{i-1} \mid u_t^i(x) \geq u_t^i(\hat{x}_t^i) - (2 + 4\lambda + \ldots + 4\lambda^{i-1})W \}$$
where $\lambda$ is the scalarized parameter in assumption (5), and $W$ is the maximum width of the confidence intervals among the input arms. LOAF not only keeps the optimal arm in the returned arm set $D_t^m$ but also ensures that the expected rewards of the arms in $D_t^m$ are close to those of the optimal arm across all objectives. The following proposition supports this claim.

**Proposition 1** For the algorithm LOAF, suppose that (5) holds and the expected rewards are contained within the confidence intervals (14) with probability at least $1 - \delta$. If $x_t^*$ is the optimal arm of the input arm set $D_t$ and $W$ is the maximum width of the confidence intervals of the input arms, then with probability at least $1 - \delta$, $x_t^*$ belongs to the set $D_t^m$, and
$$\langle \theta_*^i, x_t^* - x \rangle \leq 4(1 + \lambda + \ldots + \lambda^{i-1})W, \quad \forall i \in [m], \forall x \in D_t^m.$$

**Remark:** Proposition 1 demonstrates that LOAF returns an arm set $D_t^m$ that contains the optimal arm $x_t^*$. Meanwhile, it establishes a bound on the gap between the expected rewards of the optimal arm $x_t^*$ and any other arm within the set $D_t^m$. This bound is $O((1 + \lambda^{i-1})W)$, which increases exponentially as the objective index grows. However, most multiobjective problems typically involve only two or three objectives (Deb & Jain, 2014; Li et al., 2015), so $\lambda^{i-1}$ will not be extreme.

Another drawback of STE\textsuperscript{2}LO is its high trial consumption in the case $w_t(x_t) > \epsilon$, which is a pure exploration step without any exploitation. To address this issue, we divide the decision-making operation at each round into $S$ stages, making a more delicate trade-off between exploration and exploitation.
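Before presenting the full procedure, a minimal Python sketch of the LOAF filter may be helpful. The growth of the slack coefficient follows one reading of step 4 of Algorithm 2 (coefficient $2$ for the first objective, growing by $4\lambda^{i}$ after the $i$-th objective); the function and argument names are illustrative assumptions rather than the paper's implementation.

```python
def loaf(arms, lam, W, ucb):
    """Filter `arms` objective by objective without dropping the optimal arm.

    `ucb[x][i]` is the upper confidence bound of arm x in objective i,
    `lam` the scalarized parameter, `W` the maximum confidence width.
    """
    candidates = list(arms)
    m = len(ucb[candidates[0]])
    slack = 2.0                                  # coefficient for the first objective
    for i in range(m):
        best = max(candidates, key=lambda x: ucb[x][i])
        threshold = ucb[best][i] - slack * W
        candidates = [x for x in candidates if ucb[x][i] >= threshold]
        slack += 4.0 * lam ** (i + 1)            # next objective adds 4 * lam^{i+1}
    return candidates
```

Because the threshold is an explicit gap below the best upper confidence bound, rather than a chain of intersecting intervals, an arm whose expected rewards are within the confidence slack of the optimal arm can never be discarded, which is exactly the property formalized in Proposition 1.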
The proposed algorithm, Multiple Trade-off between Exploration and Exploitation under Lexicographic Ordering (MTE\textsuperscript{2}LO), is shown in Algorithm 3. MTE\textsuperscript{2}LO adopts a framework similar to STE\textsuperscript{2}LO, but employs a more delicate decision-making process.

Table 1: Expected reward vectors for $\lambda = 0.1$ and $\lambda = 10$

| Arms | $\lambda = 0.1$ | $\lambda = 10$ |
|------|----------------|---------------|
| Arm 1 | $(0.42, -0.11, 0.06, -0.27, 0.41)$ | $(0.33, 0.50, 0.20, -0.23, -0.03)$ |
| Arm 2 | $(0.42, -0.24, -0.22, -0.48, 0.00)$ | $(0.33, 0.34, -0.18, 0.24, -0.13)$ |
| Arm 3 | $(0.17, -0.24, -0.40, -0.38, 0.34)$ | $(-0.06, 0.34, 0.20, -0.21, 0.31)$ |
| Arm 4 | $(-0.37, -0.07, -0.40, -0.27, -0.02)$ | $(0.28, 0.15, 0.20, -0.46, 0.18)$ |
| Arm 5 | $(-0.14, -0.12, -0.09, -0.27, 0.13)$ | $(0.24, 0.29, -0.18, -0.46, 0.03)$ |
| Arm 6 | $(0.22, -0.38, -0.26, -0.50, 0.13)$ | $(0.00, -0.29, -0.26, 0.43, 0.03)$ |
| Arm 7 | $(0.30, -0.18, -0.52, -0.75, 0.08)$ | $(-0.16, 0.45, 0.40, -0.22, 0.20)$ |
| Arm 8 | $(-0.06, -0.33, -0.56, -0.42, 0.10)$ | $(-0.22, -0.30, 0.03, -0.16, -0.15)$ |
| Arm 9 | $(-0.23, -0.30, -0.66, -0.33, 0.35)$ | $(0.19, -0.16, -0.18, -0.06, -0.14)$ |
| Arm 10 | $(0.40, -0.40, -0.14, -0.38, 0.07)$ | $(-0.35, -0.10, 0.40, 0.02, -0.08)$ |

At each time step $t$, MTE$^2$LO first calculates the estimated rewards and confidence interval widths for each arm in $D_t$, using formula (13). Subsequently, MTE$^2$LO initiates a loop of $S$ stages to iteratively refine the promising arms, starting with $D_{t,1} = D_t$. At each stage $s$, MTE$^2$LO first checks whether the confidence interval widths of all arms in $D_{t,s}$ are less than or equal to $1/\sqrt{T}$. If this is the case, MTE$^2$LO invokes the LOAF algorithm with the input arm set $D_{t,s}$ and maximum confidence interval width $1/\sqrt{T}$, obtaining the promising arm set $D_{t,s}^m$. Then, MTE$^2$LO plays the arm with the highest upper confidence bound in the $m$-th objective from $D_{t,s}^m$ and records its rewards. Alternatively, if the confidence interval width of some arm in $D_{t,s}$ exceeds $2^{-s}$, MTE$^2$LO plays this arm for exploration and records its rewards. Lastly, if the widths of all confidence intervals of the arms in $D_{t,s}$ are less than or equal to $2^{-s}$, MTE$^2$LO applies the LOAF algorithm with the input arm set $D_{t,s}$ and maximum confidence interval width $2^{-s}$ to update the promising arm set from $D_{t,s}$ to $D_{t,s+1}$. The last case balances exploration and exploitation because the maximum confidence interval width $2^{-s}$ promotes exploration, while the intersection of scalarized confidence intervals in LOAF ensures exploitation. With the total number of stages set to $S = \lfloor \ln T \rfloor$, we have $2^{-S} < 1/\sqrt{T}$, so MTE$^2$LO plays an arm before the decision-making loop ends. After playing an arm and observing its rewards, MTE$^2$LO updates the estimators $\hat{\theta}_{t+1}^i$, $i \in [m]$. The following theorem guarantees the performance of MTE$^2$LO.

**Theorem 2** Suppose that (2), (3) and (5) hold.
If MTE$^2$LO is run with $\delta \in (0, 1)$, then with probability at least $1 - \delta$, the regret of MTE$^2$LO satisfies
$$R^i(T) \leq 8(1 + \lambda + \ldots + \lambda^{i-1}) \left( \sqrt{T} + 5d \ln T \left( R \sqrt{\ln (m(1+T)/\delta)} + 1 \right) \sqrt{T} \right), \quad \forall i \in [m].$$

**Remark:** Theorem 2 states that MTE$^2$LO achieves an $\tilde{O}((1 + \lambda^{i-1})d\sqrt{T})$ bound for the $i$-th objective, which is consistent with the optimal regret of single objective SLB algorithms in terms of the factors $d$ and $T$ (Dani et al., 2008; Abbasi-yadkori et al., 2011). In addition, the above theorem adopts the general regret $R^i(T)$ defined in (4), which measures the performance of each objective more accurately than the priority-based regret $\hat{R}^i(T)$ in (1). Huyuk & Tekin (2021) established a lower bound of $\Omega(T^{2/3})$ on the expected regret for MOMAB under lexicographic ordering. This does not conflict with our result, since we consider pseudo-regret instead of expected regret (Lattimore & Szepesvári, 2020).

4 EXPERIMENTS

In this section, we conduct experiments to present the empirical performance of our proposed algorithms. We adopt PF-LEX (Huyuk & Tekin, 2021) and OFUL (Abbasi-yadkori et al., 2011) as baselines, where PF-LEX is designed for MOMAB under lexicographic ordering, and OFUL is designed for the single objective SLB model. Following the existing experimental setup (Lu et al., 2019), we set the objective number $m = 5$ and the feature dimension $d = 10$. The arm sets are fixed as $D_t = D$ for $t \geq 1$, and the arm number matches the feature dimension ($|D| = 10$), which ensures that PF-LEX and our proposed algorithms encounter the same number of unknown parameters. The coefficients $\theta_*^i$ for $i \in [m]$ and the arms $x \in D$ are uniformly sampled from the unit sphere, defined here as the set $\{x \in \mathbb{R}^d : \|x\| \leq 1\}$. The first sampled arm is taken to be the lexicographically optimal arm. We set the remaining nine arms to satisfy two conditions that distinguish the lexicographically ordered bandit problem from the single objective bandit problem. Firstly, for each objective there are two arms with equal expected rewards. Secondly, all arms satisfy assumption (5), and $\lambda$ is set to 0.1 and 10, respectively, to demonstrate the performance of all algorithms across different problem difficulties. The expected reward vectors are summarized in Table 1. For the chosen arm $x_t \in D$, the reward in the $i$-th objective is given as $\langle \theta_*^i, x_t \rangle + \eta_t$, where $\eta_t$ is sampled from a Gaussian distribution with mean 0 and variance 1.

We set $T = 10^5$ and $\delta = 0.01$ for all algorithms. To reduce the effect of randomness, we run each algorithm ten times and report the average regret. The exploration parameter $\epsilon$ for STE$^2$LO and PF-LEX is set to $d^{2/3}(KT)^{-1/3}$ and $(KT)^{-1/3}$, respectively, which are the theoretically optimal choices. In line with common practice in bandit learning, we fine-tune the scale parameters $\alpha_t$, $\beta_t$, and $\gamma_t$ of the confidence interval widths in (9), (10), and (13) within the range $[10^{-3}, 1]$ (Jun et al., 2017; Lu et al., 2019). Fig. 2 displays the general regret for the 1st and 5th objectives, where Fig. 2(a) and Fig. 2(b) present the results for the problem instances with $\lambda = 0.1$ and $\lambda = 10$, respectively.
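For concreteness, the instance-generation process described above can be sketched as follows. The sampling helper, the seed, and the omission of the two additional conditions on the remaining arms are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)          # illustrative seed
m, d, K = 5, 10, 10                     # objectives, dimension, number of arms

def sample_unit_ball(n, dim):
    """Draw n points uniformly from the dim-dimensional unit ball."""
    v = rng.standard_normal((n, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = rng.random(n) ** (1.0 / dim)
    return v * r[:, None]

theta = sample_unit_ball(m, d)          # one parameter vector per objective
arms = sample_unit_ball(K, d)           # fixed arm set D

def pull(arm_index):
    """Observe the m noisy rewards of the chosen arm: <theta_i, x> + N(0, 1)."""
    return arms[arm_index] @ theta.T + rng.standard_normal(m)
```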
For $\lambda = 0.1$, OFUL performs the best in the first objective but worst in the fifth objective, as it is specifically designed for the single objective bandit model. MTE$^2$LO exhibits comparable performance to OFUL in the first objective but significantly outperforms it in the fifth objective. STE$^2$LO performs slightly better than PF-LEX in the first objective but falls behind in the fifth objective. Both STE$^2$LO and PF-LEX are inferior to MTE$^2$LO. An interesting phenomenon is that all algorithms achieve better performance in the fifth objective than in the first when $\lambda = 10$, which is inconsistent with the regret bound in Theorem 2. This peculiarity can be attributed to the fact that, for $\lambda = 10$, most randomly sampled arms have higher expected rewards than the lexicographically optimal arm in the fifth objective.

5 CONCLUSION AND FUTURE WORK

We have investigated the MOSLB model under lexicographic ordering and presented two algorithms: STE$^2$LO and MTE$^2$LO. STE$^2$LO is straightforward and independent of prior knowledge, but its regret bound is suboptimal. The improved algorithm MTE$^2$LO achieves an almost optimal regret bound $\tilde{O}((1 + \lambda^{i-1})d\sqrt{T})$ for the $i$-th objective, $i \in [m]$. We extend the metric of lexicographically ordered multiobjective bandits from the priority-based regret (1) to the general regret (4), which evaluates the performance of algorithms more accurately. Our major novelties include a new arm filter and a multiple trade-off approach for exploration and exploitation. These techniques can be easily adapted to other bandit models, such as generalized linear bandits and Lipschitz bandits. In the future, a challenging open problem is to develop an algorithm that is independent of the prior knowledge $\lambda$ and achieves the regret bound $O(d\sqrt{T})$. Moreover, Theorem 2 shows that objectives with lower priority have higher regret bounds, which is at odds with the empirical performance observed in Fig. 2(b). Thus, the regret bounds of low-priority objectives may admit further improvement.

REFERENCES

Yasin Abbasi-yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In *Advances in Neural Information Processing Systems 24*, pp. 2312–2320, 2011.

Rajeev Agrawal. The continuum-armed bandit problem. *SIAM Journal on Control and Optimization*, 33(6):1926–1951, 1995.

Ayya Alieva, Ashok Cutkosky, and Abhimanyu Das. Robust pure exploration in linear bandits with limited budget. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 187–195, 2021.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. *Journal of Machine Learning Research*, 3(11):397–422, 2002.

Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. *Machine Learning*, 47(2–3):235–256, 2002.

Peter Auer, Chao-Kai Chiang, Ronald Ortner, and Madalina Drugan. Pareto front identification from stochastic bandit feedback. In *Proceedings of the 19th International Conference on Artificial Intelligence and Statistics*, pp. 939–947, 2016.

Stephen Boyd and Lieven Vandenberghe. *Convex Optimization*. Cambridge University Press, 2004.

Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. *Foundations and Trends in Machine Learning*, 5(1):1–122, 2012.

Sébastien Bubeck, Gilles Stoltz, and Jia Yuan Yu. Lipschitz bandits without the Lipschitz constant.
In *Proceedings of the 22nd International Conference on Algorithmic Learning Theory*, pp. 144–158, 2011.

Sébastien Bubeck, Ofer Dekel, Tomer Koren, and Yuval Peres. Bandit convex optimization: $\sqrt{T}$ regret in one dimension. In *Proceedings of the 28th Conference on Learning Theory*, pp. 266–278, 2015.

Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In *Proceedings of the 14th International Conference on Artificial Intelligence and Statistics*, pp. 208–214, 2011.

Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic linear optimization under bandit feedback. In *Proceedings of the 21st Annual Conference on Learning Theory*, pp. 355–366, 2008.

Kalyanmoy Deb and Himanshu Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: Solving problems with box constraints. *IEEE Transactions on Evolutionary Computation*, 18(4):577–601, 2014.

Madalina M. Drugan and Ann Nowe. Designing multi-objective multi-armed bandits algorithms: A study. In *The 2013 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8, 2013.

Matthias Ehrgott. *Multicriteria Optimization*. Springer-Verlag, Berlin, Heidelberg, 2005.

Yasong Feng, Zengfeng Huang, and Tianyu Wang. Lipschitz bandits with batched feedback. In *Advances in Neural Information Processing Systems 35*, pp. 19836–19848, 2022.

Avishek Ghosh and Abishek Sankararaman. Breaking the $\sqrt{T}$ barrier: Instance-independent logarithmic regret in stochastic contextual linear bandits. In *Proceedings of the 39th International Conference on Machine Learning*, pp. 7531–7549, 2022.

Jiafan He, Dongruo Zhou, Tong Zhang, and Quanquan Gu. Nearly optimal algorithms for linear contextual bandits with adversarial corruptions. In *Advances in Neural Information Processing Systems 35*, pp. 34614–34625, 2022.

Jiachen Hu, Xiaoyu Chen, Chi Jin, Lihong Li, and Liwei Wang. Near-optimal representation learning for linear bandits and linear RL. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 4349–4358, 2021.